1

Machine Learning to Discover and Optimize Materials

Rosenbrock, Conrad Waldhar 01 December 2017
For centuries, scientists have dreamed of creating materials by design. Rather than discovery by accident, bespoke materials could be tailored to fulfill specific technological needs. Quantum theory and computational methods are essentially equal to the task, and computational power is the new bottleneck. Machine learning has the potential to solve that problem by approximating material behavior at multiple length scales. A full end-to-end solution must allow us to approximate the quantum mechanics, microstructure, and engineering tasks well enough to be predictive in the real world. In this dissertation, I present algorithms and methodology to address some of these problems at various length scales. In the realm of enumeration, systems with many degrees of freedom, such as high-entropy alloys, may contain prohibitively many unique possibilities, so that enumerating all of them would exhaust available memory. One way to address this problem is to know in advance how many possibilities there are, so that the user can reduce the search space by restricting the occupation of certain lattice sites. Although tools to calculate this number were available, none performed well for very large systems, and none could easily be integrated into low-level languages for use in existing scientific codes. I present an algorithm that solves these problems. Testing the robustness of machine-learned models is an essential component of any materials discovery or optimization application. While it is customary to perform a small number of system-specific tests to validate an approach, this may be insufficient in many cases. In particular, for Cluster Expansion (CE) models, the expansion may not converge quickly enough to be useful and reliable. Although the method has been used for decades, a rigorous investigation across many systems to determine when CE "breaks" was still lacking.
This dissertation includes that investigation, along with heuristics that use only a small training database to predict whether a model is worth pursuing in detail. To be useful, computational materials discovery must lead to experimental validation. However, experiments are difficult due to sample purity, environmental effects, and a host of other considerations. In many cases, it is difficult to connect theory to experiment because computation is deterministic. By combining advanced group theory with machine learning, we created a new tool that bridges the gap between experiment and theory so that experimental and computed phase diagrams can be harmonized. Grain boundaries in real materials control many important material properties, such as corrosion, thermal conductivity, and creep. Because of their high dimensionality, learning the underlying physics in order to optimize grain boundaries is extremely complex. By leveraging a mathematically rigorous representation of local atomic environments, machine learning becomes a powerful tool for approximating grain boundary properties. It also goes beyond predicting properties by highlighting the atomic environments that most influence boundary properties. This provides an immense dimensionality reduction that empowers grain boundary scientists to know where to look for deeper physical insights.
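The counting idea behind the enumeration work (knowing in advance how many symmetry-distinct configurations exist) can be illustrated with Burnside's lemma, which averages, over the symmetry group, the number of configurations each symmetry leaves fixed. This is a generic sketch of the counting principle only, not the dissertation's algorithm; the 4-site ring and the function name are illustrative.

```python
def count_distinct_colorings(n_sites, n_colors, symmetry_perms):
    """Count site occupations distinct under a symmetry group, via
    Burnside's lemma: a permutation with c cycles fixes n_colors**c
    configurations, and we average over the whole group."""
    total = 0
    for perm in symmetry_perms:
        seen, cycles = set(), 0
        for start in range(n_sites):          # count cycles of this permutation
            if start in seen:
                continue
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
        total += n_colors ** cycles
    return total // len(symmetry_perms)

# Cyclic symmetry of a 4-site ring: the four rotations.
rotations = [[(i + k) % 4 for i in range(4)] for k in range(4)]
print(count_distinct_colorings(4, 2, rotations))  # 6 binary necklaces of length 4
```

Restricting which colors may occupy which sites shrinks the count further, which is the lever the dissertation mentions for reducing the search space.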
2

Turbulent Flow Analysis and Coherent Structure Identification in Experimental Models with Complex Geometries

Amini, Noushin December 2011
Turbulent flows and the coherent structures emerging within turbulent flow fields have been studied extensively for the past few decades, and a wide variety of experimental and numerical techniques have been developed for the measurement and analysis of turbulent flows. The complex nature of turbulence requires methods that can accurately estimate its highly chaotic spatial and temporal behavior. Some of the classical cases of turbulent flows with simpler geometries have been well characterized by existing experimental techniques and numerical models. Nevertheless, since most turbulent flow fields involve complex geometries, there is increasing interest in the study of turbulent flows through models with more complicated geometries. In this dissertation, the characteristics of turbulent flows through two different facilities with complex geometries are studied using two different experimental methods. The first study investigates turbulent impinging jets through a staggered array of rods with and without crossflow. Such flows are crucial in various engineering disciplines. This experiment aimed to model the coolant flow behavior and mixing phenomena within the lower plenum of a Very High Temperature Reactor (VHTR). Dynamic Particle Image Velocimetry (PIV) and Matched Index of Refraction (MIR) techniques were applied to acquire the turbulent velocity fields within the model. Some key flow features that may significantly enhance the flow mixing within the test section or actively affect some of the structural components were identified in the velocity fields. The evolution of coherent structures within the flow field is further investigated using a Snapshot Proper Orthogonal Decomposition (POD) technique. Furthermore, a comparative POD method is proposed and successfully implemented for identifying the smaller but highly influential coherent structures that may not be captured in the full-field POD analysis.
The second experimental study examines the coolant flow through the core of an annular pebble-bed VHTR. The complex geometry of the core and the highly turbulent nature of the coolant flow passing through the gaps between fuel pebbles make this case quite challenging. In this experiment, a high-frequency Hot Wire Anemometry (HWA) system is applied for velocity measurements and investigation of the bypass flow phenomena within the near-wall gaps of the core. The velocity profiles within the gaps verify the presence of an area of increased velocity close to the outer reflector wall; however, the characteristics of the coolant flow profile are highly dependent on the gap geometry and, to a lesser extent, on the Reynolds number of the flow. The time histories of the velocity are further analyzed using a Power Spectral Density (PSD) technique to acquire information about the energy content and the energy transfer between eddies of different sizes at each point within the gaps.
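The snapshot POD step mentioned above can be sketched in a few lines of NumPy: POD modes of the fluctuating field come from an SVD of the snapshot matrix, and modal energies from the squared singular values. This is a generic snapshot-POD sketch on synthetic data, not the dissertation's implementation; the signal and noise level are arbitrary choices.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD: columns of `snapshots` are velocity fields at
    successive instants. Returns spatial modes and normalized modal
    energies (squared singular values of the fluctuating field)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                        # subtract the mean flow
    # Economy SVD: left singular vectors are the POD modes.
    modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sing_vals**2 / np.sum(sing_vals**2)
    return modes, energy

# Synthetic data: one dominant coherent structure plus weak noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 32)
field = np.outer(np.sin(x), np.cos(2 * np.pi * 5 * t)) \
        + 0.01 * rng.standard_normal((64, 32))
modes, energy = snapshot_pod(field)
print(f"energy captured by the first mode: {energy[0]:.3f}")
```

A comparative POD, as proposed in the dissertation, would go further by contrasting decompositions of different subregions or cases; the sketch above only shows the basic full-field decomposition.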
3

Continuous time and space identification: an identification process based on Chebyshev polynomial expansion for monitoring of continuous structures / Adaptive sensor networks for smart structures/machines

Chochol, Catherine 01 October 2013
The identification method developed in this thesis builds on the work of D. Rémond. The following input data are considered: the response of the structure, measured discretely, which depends on the dimensions of the structure (time, space); the behaviour model, expressed as an ordinary or partial differential equation; and the boundary conditions and excitation source, which are treated as unmeasured or unknown. The identification procedure consists of three steps: projection of the measured signal onto an orthogonal polynomial basis (Chebyshev polynomials), differentiation of the measured signal, and parameter estimation, obtained by transforming the behaviour equation into an algebraic equation. The Bernoulli beam made it possible to relate the truncation order of the polynomial basis to the number of waves contained in the projected signal. For a noisy signal, we established minimum values of the wave number and truncation order that ensure an accurate estimate of the parameter to be identified. The Timoshenko beam example allowed the identification procedure to be adapted to the estimation of several parameters: three parameters with radically different orders of magnitude were estimated. This example also illustrates the regularisation strategy to adopt for this type of problem. Damping was successfully estimated on a beam, from both its transient response and its steady-state response. The two-dimensional case of a plate was also treated, and a relation between wave number and truncation order similar to that of the Bernoulli beam was established. Two experimental applications were treated in this thesis. The first applies the Bernoulli beam model to defect detection: since the identification procedure assumes that the structure is continuous, a discontinuity is expected to show up as an outlier in the reconstructed parameter, and the procedure successfully locates the discontinuity. The second application reconstructs the damping of a 2D structure, a free-free plate; the results obtained with our identification procedure are compared with those obtained by Ablitzer using the RIFF method, and the two methods give closely comparable results. / The purpose of this work is to adapt and improve the continuous-time identification method proposed by D. Rémond for continuous structures. Rémond separated this identification method into three steps: signal expansion, signal differentiation, and parameter estimation. In this study, both the expansion and differentiation steps are substantially improved. An original differentiation method is developed and adapted to partial differentiation. The existing identification process is first adapted to continuous structures; the expansion and differentiation principles are then presented. For this identification purpose, a novel differentiation operator was proposed, whose aim is to limit the sensitivity of the method to the tuning parameter (the truncation number). The precision gained with this novel operator is highlighted through different examples. An interesting property of Chebyshev polynomials is also brought to the fore: an exact discrete expansion obtained at the polynomials' Gauss points. The Gauss points permit accurate identification with a restricted number of sensors, limiting de facto the signal acquisition duration. In order to reduce the noise sensitivity of the method, a regularization step, known as the instrumental variable and inspired by the field of automatic control, was added.
The instrumental variable works as a filter: the identified parameter is recursively filtered through the structural model, and the final result is the optimal parameter estimate for the given model. Different numerical applications are presented, with a focus on practical particularities such as the use of the steady-state response and the identification of multiple parameters. The first experimental application is crack detection on a beam; the second is the identification of damping on a plate.
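The first two steps of the procedure (expansion on a Chebyshev basis sampled at Gauss points, then differentiation in coefficient space) can be sketched with NumPy's Chebyshev module. The signal and truncation order below are illustrative stand-ins, not values from the thesis.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

order = 12                      # truncation order of the basis (an arbitrary choice)
x = C.chebpts1(order + 1)       # Chebyshev-Gauss points on [-1, 1]
signal = np.sin(3 * x)          # stand-in for a discretely measured response

# Step 1: projection of the measured signal onto the Chebyshev basis.
coeffs = C.chebfit(x, signal, order)
# Step 2: differentiation performed on the expansion coefficients,
# avoiding noisy finite differences on the raw samples.
dcoeffs = C.chebder(coeffs)
approx_derivative = C.chebval(x, dcoeffs)

residual = np.max(np.abs(approx_derivative - 3 * np.cos(3 * x)))
print(f"max derivative error at the Gauss points: {residual:.2e}")
```

The third step, parameter estimation, would then insert these differentiated expansions into the behaviour equation to obtain an algebraic system in the unknown parameters; that step is model-specific and is not sketched here.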
4

Nested Noun Phrase Detection in English Text with BERT

Misra, Shweta January 2023
In this project, we address the task of nested noun phrase identification in English sentences, where a phrase is defined as a group of words functioning as one unit in a sentence. Prior research has extensively explored the identification of various phrase types for language-understanding and text-generation tasks. Our aim is to tackle the novel challenge of identifying nested noun phrases within sentences. To accomplish this, we first review existing work on related topics such as partial parsing and noun phrase identification. We then propose a novel approach based on transformer models to recursively identify nested noun phrases in sentences. We fine-tune a pre-trained uncased BERT model to detect phrase structures in a sentence and determine whether they represent noun phrases. Our recursive approach merges relevant segments of a sentence and assigns labels to the noun phrases at each step, facilitating the identification of nested structures. Evaluation of the model shows promising results: accuracy of up to 93.6% when considering all noun phrases in isolation and 90.9% when accounting for the predicted phrase structure of the sentence, with recall of 83.5% and 81.2% at the two levels, respectively. Overall, the model proves effective at identifying nested noun phrases, showcasing the potential of transformer-based models for phrase structure identification. Future research should explore further applications and enhancements of such models in this domain.
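The recursive merge-and-label loop can be illustrated schematically, with trivial hand-written rules standing in for the fine-tuned BERT classifier. All names, rules, and vocabularies below are hypothetical, for illustration only; they show how repeated merging of adjacent segments yields nested phrase structure.

```python
NOUNS = {"dog", "house"}
DETERMINERS = {"the", "a", "an"}

def head(seg):
    """Rightmost word of a (possibly nested) segment."""
    return head(seg[-1]) if isinstance(seg, list) else seg

def is_noun_like(seg):
    return head(seg) in NOUNS

def merge_compound(left, right):
    """Stub rule: two adjacent noun-headed segments form a compound."""
    return [left, right] if is_noun_like(left) and is_noun_like(right) else None

def merge_determiner(left, right):
    """Stub rule: a determiner attaches to a following noun-headed segment."""
    if isinstance(left, str) and left in DETERMINERS and is_noun_like(right):
        return [left, right]
    return None

def recursive_np_parse(tokens):
    """Greedy bottom-up loop: repeatedly merge one adjacent pair per pass,
    trying noun compounds before determiners so nesting forms inside-out."""
    segments = list(tokens)
    while True:
        merged = False
        for rule in (merge_compound, merge_determiner):
            for i in range(len(segments) - 1):
                unit = rule(segments[i], segments[i + 1])
                if unit is not None:
                    segments[i:i + 2] = [unit]   # replace the pair by one segment
                    merged = True
                    break
            if merged:
                break
        if not merged:
            return segments

print(recursive_np_parse(["the", "dog", "house"]))   # [['the', ['dog', 'house']]]
print(recursive_np_parse(["the", "dog", "saw", "a", "house"]))
```

In the thesis's approach, the merge and label decisions are made by the fine-tuned model rather than by fixed rules; the loop structure is the point of this sketch.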
5

Block-sparse models in multi-modality: application to the inverse model in EEG/MEG

Afdideh, Fardin 12 October 2018
Many natural phenomena are too complex to be fully characterised by a single measuring instrument or a single modality. The research field of multi-modality has therefore emerged to better identify the rich characteristics of a multi-property natural phenomenon by jointly analysing data collected from individual, in some sense complementary, modalities. In our study, the multi-property phenomenon of interest is human brain activity, which we seek to localise more accurately through its electromagnetic properties, measurable non-invasively. In neurophysiology, electroencephalography (EEG) and magnetoencephalography (MEG) are the usual means of measuring the electrical and magnetic properties of brain activity. Our real-world application, the EEG/MEG source reconstruction problem, is fundamental in neuroscience, with uses ranging from cognitive science to neuropathology and surgical planning. Since the EEG/MEG source reconstruction problem can be reformulated as an underdetermined system of linear equations, the solution (the estimated brain source activity) must be sparse enough to be uniquely recoverable, and the required amount of sparsity is determined by so-called recovery conditions. In high-dimensional problems, however, conventional recovery conditions are extremely strict. By clustering the coherent columns of a dictionary, a more incoherent structure can be obtained. This strategy was proposed as a block structure identification framework, which yields an automatic segmentation of the brain source space without using any information about brain source activity or the EEG/MEG signals. Despite the resulting less coherent block-structured dictionary, the conventional recovery condition can no longer be computed from the coherence characterisation. To meet this challenge, a general framework of block-sparse exact recovery conditions, comprising three theoretical conditions and one algorithm-dependent condition, was proposed. Finally, we studied EEG/MEG multi-modality and showed that combining the two modalities reveals more refined brain regions. / Three main challenges have been addressed in this thesis, in three chapters. The first concerns the ineffectiveness of some classic methods in high-dimensional problems. It is partially addressed through the idea of clustering the coherent parts of a dictionary based on the proposed characterisation, in order to create more incoherent atomic entities in the dictionary; this is proposed as a block structure identification framework. The more incoherent the atomic entities, the greater the improvement in the exact recovery conditions. In addition, we applied this clustering idea to real-world EEG/MEG leadfields to segment the brain source space, without using any information about brain source activity or the EEG/MEG signals. The second challenge arises when classic recovery conditions cannot be established for the new concept of constraint, i.e., block-sparsity. As the second research orientation, we therefore developed a general framework of block-sparse exact recovery conditions, i.e., four theoretical conditions and one algorithm-dependent condition, which ensure the uniqueness of the block-sparse solution of the corresponding weighted mixed-norm optimisation problem in an underdetermined system of linear equations.
The generality of the framework concerns the properties of the underdetermined system of linear equations, the extracted dictionary characterisations, the optimisation problems, and ultimately the recovery conditions. Finally, the combination of different kinds of information about the same phenomenon is the subject of the third challenge, addressed in the last part of the dissertation with application to brain source space segmentation. More precisely, we showed that by combining the EEG and MEG leadfields, thereby gaining the electromagnetic properties of the head, more refined brain regions appeared.
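The coherence quantities underlying such recovery conditions can be sketched concretely. Below is a minimal NumPy implementation of the standard block-coherence measure of Eldar, Kuppinger, and Bölcskei (largest spectral norm of the cross-Gram matrix between two distinct column-normalised blocks, divided by the block size). This is the textbook definition, not necessarily the characterisations developed in the thesis, and the random matrix is only a stand-in for an EEG/MEG leadfield.

```python
import numpy as np

def block_coherence(D, block_size):
    """Block coherence of a dictionary whose columns are grouped into
    consecutive equal-size blocks: max over distinct block pairs of
    ||D_i^T D_j||_2 / block_size, with columns normalised to unit norm."""
    D = D / np.linalg.norm(D, axis=0)            # normalise the atoms
    n_blocks = D.shape[1] // block_size
    blocks = [D[:, i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
    mu = 0.0
    for i in range(n_blocks):
        for j in range(n_blocks):
            if i != j:
                cross = blocks[i].T @ blocks[j]  # cross-Gram between blocks
                mu = max(mu, np.linalg.norm(cross, 2) / block_size)
    return mu

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 12))                # stand-in "leadfield"
print(f"block coherence, blocks of 3: {block_coherence(D, 3):.3f}")
```

Clustering coherent columns into blocks, as the thesis proposes, aims precisely to drive this kind of inter-block coherence down so that block-sparse recovery conditions become easier to satisfy.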
6

THE DEVELOPMENT OF MASS SPECTROMETRIC METHODS FOR THE DETERMINATION OF THE CHEMICAL COMPOSITION OF COMPLEX MIXTURES RELEVANT TO THE ENERGY SECTOR AND THE DEVELOPMENT OF A NEW DEVICE FOR CHEMICALLY ENHANCED OIL RECOVERY FORMULATION EVALUATION

Katherine Elisabeth Wehde (8054564) 28 November 2019
This dissertation focused on the development of mass spectrometric methodologies, separation techniques, and engineered devices for the optimal analysis of complex mixtures relevant to the energy sector, such as alternative fuels, petroleum-based fuels, crude oils, and processed base oils. Mass spectrometry (MS) has been widely recognized as a powerful tool for the analysis of complex mixtures. In complex energy samples, such as petroleum-based fuels, alternative fuels, and oils, high-resolution MS alone may not be sufficient to elucidate chemical composition information, so separation before MS analysis is often necessary. For volatile samples, in-line two-dimensional gas chromatography (GC×GC) can be used to separate complex mixtures prior to ionization. This technique allows a more accurate determination of the compounds in a mixture by simplifying the mixture into its components prior to ionization, separation based on mass-to-charge ratio (m/z), and detection. A GC×GC coupled to a high-resolution time-of-flight MS was used in this research to determine the chemical composition of alternative aviation fuels, a petroleum-based aviation fuel, alternative aviation fuel candidates and blending components, and processed base oils.
Additionally, as the cutting edge of science and technology evolves, methods and equipment must be updated and adapted for new samples and new sector demands. One such case, explored in this dissertation, was the validation of an updated standardized method, ASTM D2425 (2019), investigated on a new instrument, a quadrupole MS, and a new sample type, a renewable aviation fuel. Lastly, a miniaturized coreflood device was developed and evaluated for analyzing candidate chemically enhanced oil recovery (cEOR) formulations of brine, surfactant(s), and polymer(s). The miniaturized device was used to evaluate two different cEOR formulations and determine whether the components of the recovered oil changed.
