31

Using AI to improve the effectiveness of turbine performance data

Shreyas Sudarshan Supe (17552379) 06 December 2023 (has links)
<p dir="ltr">For turbocharged engine simulation analysis, manufacturer-provided data are typically used to predict the mass flow and efficiency of the turbine. To create a turbine map, physical tests are performed in labs at various turbine speeds and expansion ratios. These tests can be very expensive and time-consuming. Current testing methods can have limitations that result in errors in the turbine map. As such, only a modest set of data can be generated, all of which have to be interpolated and extrapolated to create a smooth surface that can then be used for simulation analysis.</p><p><br></p><p dir="ltr">The current method used by the manufacturer is a physics-informed polynomial regression model that depends on the Blade Speed Ratio (BSR ) in the polynomial function to model the efficiency and MFP. This method is memory-consuming and provides a lower-than-desired accuracy. This model is decades old and must be updated with new state-of-the-art Machine Learning models to be more competitive. Currently, CTT is facing up to +/-2% error in most turbine maps for efficiency and MFP and the aim is to decrease the error to 0.5% while interpolating the data points in the available region. The current model also extrapolates data to regions where experimental data cannot be measured. Physical tests cannot validate this extrapolation and can only be evaluated using CFD analysis.</p><p><br></p><p dir="ltr">The thesis focuses on investigating different AI techniques to increase the accuracy of the model for interpolation and evaluating the models for extrapolation. The data was made available by CTT. The available data consisted of various turbine parameters including ER, turbine speeds, efficiency, and MFP which were considered significant in turbine modeling. The AI models developed contained the above 4 parameters where ER and turbine speeds are predictors and, efficiency and MFP are the response. Multiple supervised ML models such as SVM, GPR, LMANN, BRANN, and GBPNN were developed and evaluated. From the above 5 ML models, BRANN performed the best achieving an error of 0.5% across multiple turbines for efficiency and MFP. The same model was used to demonstrate extrapolation, where the model gave unreliable predictions. Additional data points were inputted in the training data set at the far end of the testing regions which greatly increased the overall look of the map.</p><p><br></p><p dir="ltr">An additional contribution presented here is to completely predict an expansion ratio line and evaluate with CTT test data points where the model performed with an accuracy of over 95%. Since physical testing in a lab is expensive and time-consuming, another goal of the project was to reduce the number of data points provided for ANN model training. Furthermore, strategically reducing the data points is of utmost importance as some data points play a major role in the training of ANN and can greatly affect the model's overall accuracy. Up to 50% of the data points were removed for training inputs and it was found that BRANN was able to predict a satisfactory turbine map while reducing 20% of the overall data points at various regions.</p>
32

Extending standard outdoor noise propagation models to complex geometries

Kamrath, Matthew 28 September 2017 (has links)
Noise engineering methods (e.g. ISO 9613-2 or CNOSSOS-EU) efficiently approximate sound levels from roads, railways, and industrial sources in cities. However, engineering methods are limited to simple box-shaped geometries. This dissertation develops and validates a hybrid method to extend the engineering methods to more complicated geometries by introducing an extra attenuation term that represents the influence of a real object compared to a simplified object.

Calculating the extra attenuation term requires reference calculations to quantify the difference between the complex and simplified objects. Since performing a reference computation for each path is too computationally expensive, the extra attenuation term is linearly interpolated from a data table containing the corrections for many source and receiver positions and frequencies. The 2.5D boundary element method produces the levels for the real complex geometry and a simplified geometry, and subtracting these levels yields the corrections in the table.

This dissertation validates the hybrid method for a T-barrier with hard ground, soft ground, and buildings. All three cases demonstrate that the hybrid method is more accurate than standard engineering methods for complex cases.
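As a rough illustration of the hybrid idea described above (not the dissertation's implementation), the sketch below adds an interpolated extra-attenuation correction to an engineering-method level; the table axes, values, and the distance/frequency parameterization are assumptions.

```python
# Illustrative sketch of the hybrid correction: an engineering-method level is
# adjusted by an "extra attenuation" term interpolated from a precomputed table
# (random placeholders here stand in for 2.5D BEM reference results).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed table axes: source-receiver distance [m] and octave-band frequency [Hz]
distances = np.linspace(10.0, 200.0, 20)
frequencies = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])

# extra_attenuation[d, f] = L_complex - L_simple from the reference runs;
# random values here only illustrate the lookup mechanics.
rng = np.random.default_rng(1)
extra_attenuation = rng.normal(0.0, 2.0, (distances.size, frequencies.size))
lookup = RegularGridInterpolator((distances, frequencies), extra_attenuation)

def hybrid_level(engineering_level_db, distance_m, frequency_hz):
    """Engineering-method level plus the interpolated correction for the complex object."""
    return engineering_level_db + float(lookup([[distance_m, frequency_hz]])[0])

print(hybrid_level(62.0, 75.0, 500.0))
```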
33

Remaining useful life estimation of critical components based on Bayesian approaches

Mosallam, Ahmed 18 December 2014 (has links)
Constructing prognostic models relies upon understanding the degradation process of the monitored critical components to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data-driven. Data-driven models require that sufficient historical data have been collected.

In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI) are constructed from the selected variables, which represent the degradation as a function of time, and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator.

The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
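A minimal sketch of the online phase is given below, under heavy simplifications: synthetic health indicators, a plain k-NN match between truncated histories, and a GPR extrapolation to an assumed failure threshold; the discrete Bayesian filtering step is omitted, so this is an illustration rather than the thesis method.

```python
# Simplified sketch: match a partial online health indicator (HI) to offline
# reference HIs with k-NN, then extrapolate the matched HI with a GPR and read
# off a crude RUL at an assumed failure threshold. All data are synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(2)
t_full = np.linspace(0, 100, 101)
# Offline library: three synthetic degradation histories with different rates
offline_his = [1.0 - rate * t_full + rng.normal(0, 0.01, t_full.size)
               for rate in (0.006, 0.008, 0.010)]

# Online component observed only for its first 40 time steps
t_obs = t_full[:40]
online_hi = 1.0 - 0.009 * t_obs + rng.normal(0, 0.01, t_obs.size)

# k-NN over the truncated histories picks the most similar offline reference
knn = NearestNeighbors(n_neighbors=1).fit([hi[:40] for hi in offline_his])
_, idx = knn.kneighbors(online_hi.reshape(1, -1))
best = offline_his[idx[0, 0]]

# A GPR with a linear (dot-product) kernel extrapolates the matched HI in time
gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel())
gpr.fit(t_full.reshape(-1, 1), best)

failure_threshold = 0.2                      # assumed end-of-life HI level
t_grid = np.linspace(40, 300, 261)
hi_pred = gpr.predict(t_grid.reshape(-1, 1))
rul = t_grid[np.argmax(hi_pred <= failure_threshold)] - t_obs[-1]
print(f"Estimated RUL: {rul:.1f} time units")
```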
34

Image Segmentation, Parametric Study, and Supervised Surrogate Modeling of Image-based Computational Fluid Dynamics

Islam, Md Mahfuzul 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the recent advancement of computation and imaging technology, image-based computational fluid dynamics (ICFD) has emerged as a powerful non-invasive capability to study biomedical flows. These modern technologies increase the potential of computation-aided diagnostics and therapeutics in a patient-specific environment. I studied three components of this image-based computational fluid dynamics process in this work. To ensure accurate medical assessment, realistic computational analysis is needed, for which patient-specific image segmentation of the diseased vessel is of paramount importance. In this work, image segmentation of several human arteries, veins, capillaries, and organs was conducted for use in further hemodynamic simulations. To accomplish this, several open-source and commercial software packages were used. This study incorporates a new computational platform, called InVascular, to quantify the 4D velocity field in image-based pulsatile flows using the Volumetric Lattice Boltzmann Method (VLBM). We also conducted several parametric studies on an idealized case of a 3-D pipe with the dimensions of a human renal artery. We investigated the relationship between stenosis severity and resistive index (RI), and explored how pulsatile parameters such as heart rate or pulsatile pressure gradient affect RI. As the process of ICFD analysis is based on imaging and other hemodynamic data, it is often time-consuming due to the extensive data processing time. For clinicians to make fast medical decisions regarding their patients, rapid and accurate ICFD results are needed. To achieve that, we also developed surrogate models to show the potential of supervised machine learning methods in constructing efficient and precise surrogate models for Hagen-Poiseuille and Womersley flows.
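The kind of supervised surrogate mentioned in the last sentence can be illustrated with a small sketch: a neural network learns the analytic Hagen-Poiseuille flow rate from pressure gradient, radius, and viscosity. The parameter ranges and model choice are assumptions for illustration, not the study's actual surrogate.

```python
# Hedged sketch: a supervised surrogate for Hagen-Poiseuille flow trained on
# analytically generated data. Parameter ranges are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
dp_dx = rng.uniform(50.0, 500.0, n)          # pressure gradient [Pa/m]
radius = rng.uniform(1e-3, 4e-3, n)          # vessel radius [m]
mu = rng.uniform(3e-3, 4e-3, n)              # dynamic viscosity [Pa*s]
X = np.column_stack([dp_dx, radius, mu])

# Analytic Hagen-Poiseuille flow rate Q = pi * (dp/dx) * R^4 / (8 mu), in mL/s
Q = np.pi * dp_dx * radius**4 / (8.0 * mu) * 1e6

X_tr, X_te, Q_tr, Q_te = train_test_split(X, Q, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
)
surrogate.fit(X_tr, Q_tr)
print("surrogate R^2 on held-out data:", round(surrogate.score(X_te, Q_te), 3))
```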
35

Optimal Q-Space Sampling Scheme: Using Gaussian Process Regression and Mutual Information

Hassler, Ture, Berntsson, Jonathan January 2022 (has links)
Diffusion spectrum imaging is a type of diffusion magnetic resonance imaging capable of capturing very complex tissue structures, but it requires a very large number of samples in q-space and therefore time. The purpose of this project was to create and evaluate a new sampling scheme in q-space for diffusion MRI, trying to recreate the ensemble averaged propagator (EAP) with fewer samples without significant loss of quality. The sampling scheme was created by greedily selecting the measurements contributing the most mutual information. The EAP was then recreated using the sampling scheme and interpolation. The mutual information was approximated using the kernel from a Gaussian process machine learning model. The project showed limited but promising results on synthetic data, but was highly restricted by the amount of available computational power. Having to resort to a lower-resolution mesh when calculating the optimal sampling scheme significantly reduced the overall performance.
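A rough, low-resolution sketch of the greedy selection idea is shown below; the RBF kernel, the 2-D grid, and the sample budget are assumptions made for illustration, and the mutual-information gain is approximated by the usual ratio of conditional variances under the Gaussian model.

```python
# Rough sketch: greedy sample selection on a small synthetic q-space grid,
# scoring each candidate by an approximate mutual-information gain computed
# from a Gaussian-process RBF kernel. Kernel and grid are assumptions.
import numpy as np

def rbf_kernel(A, B, length_scale=0.25):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# Candidate q-space locations: a coarse 10x10 grid in 2-D for illustration
g = np.linspace(-1.0, 1.0, 10)
V = np.array([[x, y] for x in g for y in g])
K = rbf_kernel(V, V) + 1e-6 * np.eye(len(V))   # jitter for numerical stability

def cond_var(i, idx):
    """Posterior variance of point i given the points in idx."""
    if len(idx) == 0:
        return K[i, i]
    Kss = K[np.ix_(idx, idx)]
    ks = K[idx, i]
    return K[i, i] - ks @ np.linalg.solve(Kss, ks)

selected, remaining = [], list(range(len(V)))
for _ in range(8):                              # greedily pick 8 samples
    def mi_gain(i):
        rest = [j for j in remaining if j != i]
        return cond_var(i, selected) / cond_var(i, rest)
    best = max(remaining, key=mi_gain)
    selected.append(best)
    remaining.remove(best)

print("selected q-space points:\n", V[selected])
```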
36

CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING: Characterization using multivariate data analysis at Volvo CE

Hassani, Mujtaba January 2020 (has links)
Human activities have increased the concentration of CO2 in the atmosphere, contributing to global warming. Construction equipment are semi-stationary machines that spend at least 30% of their lifetime idling. The majority of construction equipment is diesel powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied in this work are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R), and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on prediction quality in exploratory data analysis in this field. Moreover, through mean centering and min-max scaling, model accuracy increased markedly. ANN and GP-R had the highest accuracy (99%), PLS-R was the third most accurate model (98%), ML-R the fourth (97%), SVM-R the fifth (73%), and the lowest accuracy was recorded for PC-R (83%). The second part of this project estimated CO2 emissions based on the fuel used, adopting the NONROAD2008 model.
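As a schematic of the comparison workflow, the hedged sketch below scales the inputs and cross-validates several of the listed model families; the predictor names and data are synthetic stand-ins, since the Volvo CE dataset is not reproduced here.

```python
# Hedged sketch of the model-comparison workflow on synthetic stand-in data:
# min-max scaling followed by several regressor families, scored by R^2.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 400
# Placeholder predictors: engine speed [rpm], coolant temp [C], ambient temp [C]
X = np.column_stack([rng.uniform(600, 900, n),
                     rng.uniform(40, 95, n),
                     rng.uniform(-10, 35, n)])
# Placeholder idle fuel consumption [L/h] with noise (not Volvo CE data)
y = 0.002 * X[:, 0] + 0.01 * X[:, 1] - 0.005 * X[:, 2] + rng.normal(0, 0.05, n)

models = {
    "ML-R": LinearRegression(),
    "SVM-R": SVR(),
    "GP-R": GaussianProcessRegressor(alpha=1e-2, normalize_y=True),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
    "PLS-R": PLSRegression(n_components=2),
    "PC-R": make_pipeline(PCA(n_components=2), LinearRegression()),
}
for name, model in models.items():
    pipe = make_pipeline(MinMaxScaler(), model)
    score = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name:6s} mean R^2 = {score:.3f}")
```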
37

IMAGE SEGMENTATION, PARAMETRIC STUDY, AND SUPERVISED SURROGATE MODELING OF IMAGE-BASED COMPUTATIONAL FLUID DYNAMICS

MD MAHFUZUL ISLAM (12455868) 12 July 2022 (has links)
With the recent advancement of computation and imaging technology, image-based computational fluid dynamics (ICFD) has emerged as a powerful non-invasive capability to study biomedical flows. These modern technologies increase the potential of computation-aided diagnostics and therapeutics in a patient-specific environment. I studied three components of this image-based computational fluid dynamics process in this work.

To ensure accurate medical assessment, realistic computational analysis is needed, for which patient-specific image segmentation of the diseased vessel is of paramount importance. In this work, image segmentation of several human arteries, veins, capillaries, and organs was conducted for use in further hemodynamic simulations. To accomplish this, several open-source and commercial software packages were used.

This study incorporates a new computational platform, called InVascular, to quantify the 4D velocity field in image-based pulsatile flows using the Volumetric Lattice Boltzmann Method (VLBM). We also conducted several parametric studies on an idealized case of a 3-D pipe with the dimensions of a human renal artery. We investigated the relationship between stenosis severity and resistive index (RI), and explored how pulsatile parameters such as heart rate or pulsatile pressure gradient affect RI.

As the process of ICFD analysis is based on imaging and other hemodynamic data, it is often time-consuming due to the extensive data processing time. For clinicians to make fast medical decisions regarding their patients, rapid and accurate ICFD results are needed. To achieve that, we also developed surrogate models to show the potential of supervised machine learning methods in constructing efficient and precise surrogate models for Hagen-Poiseuille and Womersley flows.
38

Multi-fidelity Machine Learning for Perovskite Band Gap Predictions

Panayotis Thalis Manganaris (16384500) 16 June 2023 (has links)
A wide range of optoelectronic applications demand semiconductors optimized for purpose. My research focused on data-driven identification of ABX3 halide perovskite compositions for optimum photovoltaic absorption in solar cells. I trained machine learning models on previously reported datasets of halide perovskite band gaps based on first principles computations performed at different fidelities. Using these, I identified mixtures of candidate constituents at the A, B or X sites of the perovskite supercell which leveraged how mixed perovskite band gaps deviate from the linear interpolations predicted by Vegard's law of mixing to obtain a selection of stable perovskites with band gaps in the ideal range of 1 to 2 eV for visible light spectrum absorption. These models predict the perovskite band gap using the composition and inherent elemental properties as descriptors. This enables accurate, high fidelity prediction and screening of the much larger chemical space from which the data samples were drawn.

I utilized a recently published density functional theory (DFT) dataset of more than 1300 perovskite band gaps from four different levels of theory, added to an experimental perovskite band gap dataset of ~100 points, to train random forest regression (RFR), Gaussian process regression (GPR), and Sure Independence Screening and Sparsifying Operator (SISSO) regression models, with data fidelity added as one-hot encoded features. I found that RFR yields the best model with a band gap root mean square error of 0.12 eV on the total dataset and 0.15 eV on the experimental points. SISSO provided compound features and functions for direct prediction of band gap, but errors were larger than from RFR and GPR. Additional insights gained from Pearson correlation and Shapley additive explanation (SHAP) analysis of learned descriptors suggest the RFR models performed best because of (a) their focus on identifying and capturing relevant feature interactions and (b) their flexibility to represent nonlinear relationships between such interactions and the band gap. The best model was deployed for predicting the experimental band gap of 37785 hypothetical compounds. Based on this, we identified 1251 stable compounds with band gap predicted to be between 1 and 2 eV at experimental accuracy, successfully narrowing the candidates to about 3% of the screened compositions.
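The one-hot fidelity encoding can be illustrated with a small, entirely synthetic stand-in for the real descriptor set, as in the hedged sketch below; the feature names, fidelity labels, and data are placeholders, not the published datasets.

```python
# Illustrative sketch of multi-fidelity encoding: compositional descriptors plus
# one-hot fidelity flags feed a random forest band-gap model. All data synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 1400
frame = pd.DataFrame({
    "mean_electronegativity": rng.uniform(1.5, 3.0, n),   # hypothetical descriptor
    "mean_ionic_radius": rng.uniform(0.8, 2.2, n),         # hypothetical descriptor
    "x_site_Br_fraction": rng.uniform(0.0, 1.0, n),        # hypothetical descriptor
    "fidelity": rng.choice(["PBE", "HSE", "GLLB-SC", "experiment"], n),
})
# Placeholder band gaps loosely tied to the descriptors (synthetic, in eV)
gap = (1.2 + 0.8 * frame["mean_electronegativity"]
       - 0.5 * frame["x_site_Br_fraction"]
       + rng.normal(0, 0.1, n))

X = pd.get_dummies(frame, columns=["fidelity"])            # one-hot encode fidelity
X_tr, X_te, y_tr, y_te = train_test_split(X, gap, random_state=0)
rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rfr.predict(X_te)) ** 0.5
print(f"band-gap RMSE on held-out synthetic data: {rmse:.3f} eV")
```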
39

Implementation of Machine Learning and Internal Temperature Sensors in Nail Penetration Testing of Lithium-ion Batteries

Casey M Jones (9607445) 13 June 2023 (has links)
This work focuses on the collection and analysis of Lithium-ion battery operational and temperature data during nail penetration testing through two different experimental approaches. Raman spectroscopy, machine learning, and internal temperature sensors are used to collect and analyze data to further investigate the effects on cell operation during and after nail penetrations, and the feasibility of using this data to predict future performance.

The first section of this work analyzes the effects on continued operation of a small Lithium-ion prismatic cell after nail penetration. Raman spectroscopy is used to examine the effects on the anode and cathode materials of cells that are cycled for different amounts of time after a nail puncture. Incremental capacity analysis is then used to corroborate the findings from the Raman analysis. The study finds that the operational capacity and lifetime of cells is greatly reduced due to the accelerated degradation caused by loss of material, uneven current distribution, and exposure to atmosphere. This leads into the study of using the magnitude and corresponding voltage of incremental capacity peaks after nail puncture to forecast the operation of damaged cells. A Gaussian process regression is used to predict discharge capacity of different cells that experience the same type of nail puncture. The results from this study show that the method is capable of making accurate predictions of cell discharge capacity even with the higher rate of variance in operation after nail puncture, showing the method of prediction has the potential to be implemented in devices such as battery management systems.

The second section of this work proposes a method of inserting temperature sensors into commercially-available cylindrical cells to directly obtain internal temperature readings. Characterization tests are used to determine the effect on the operability of the modified cells after the sensors are inserted, and lifetime cycle testing is implemented to determine the long-term effects on cell performance. The results show the sensor insertion causes a small reduction in operational performance, and lifetime cycle testing shows the cells can operate near their optimal output for approximately 100-150 cycles. Modified cells are then used to monitor internal temperatures during nail penetration tests and how the amount of aging affects the temperature response. The results show that more aging in a cell causes higher temperatures during nail puncture, as well as a larger difference between internal and external temperatures, due mostly to the larger contribution of Joule heating caused by increased internal resistance.
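A minimal sketch of the capacity-forecasting step is given below using a synthetic post-puncture fade curve; the kernel choice and fade shape are assumptions, not the cell data used in the study.

```python
# Minimal sketch (synthetic data): Gaussian process regression over cycle number
# forecasts post-puncture discharge capacity with uncertainty bounds.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(6)
cycles = np.arange(1, 61).reshape(-1, 1)
# Accelerated capacity fade after nail puncture (placeholder shape, in Ah)
capacity = 1.1 * np.exp(-cycles.ravel() / 80.0) + rng.normal(0, 0.01, 60)

kernel = ConstantKernel(1.0) * RBF(length_scale=20.0) + WhiteKernel(1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(cycles, capacity)

future = np.arange(61, 101).reshape(-1, 1)
mean, std = gpr.predict(future, return_std=True)
print(f"predicted capacity at cycle 100: {mean[-1]:.3f} +/- {2 * std[-1]:.3f} Ah")
```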
40

Geometric Uncertainty Analysis of Aerodynamic Shapes Using Multifidelity Monte Carlo Estimation

Triston Andrew Kosloske (15353533) 27 April 2023 (has links)
Uncertainty analysis is of great use both for calculating outputs that are more akin to real flight, and for optimization to more robust shapes. However, implementation of uncertainty has been a longstanding challenge in the field of aerodynamics due to the computational cost of simulations. Geometric uncertainty in particular is often left unexplored in favor of uncertainties in freestream parameters, turbulence models, or computational error. Therefore, this work proposes a method of geometric uncertainty analysis for aerodynamic shapes that mitigates the barriers to its feasible computation. The process takes a two- or three-dimensional shape and utilizes a combination of multifidelity meshes and Gaussian process regression (GPR) surrogates in a multifidelity Monte Carlo (MFMC) algorithm. Multifidelity meshes allow for finer sampling with a given budget, making the surrogates more accurate. GPR surrogates are made practical to use by parameterizing major factors in geometric uncertainty with only four variables in 2-D and five in 3-D. In both cases, two parameters control the heights of steps that occur on the top and bottom of airfoils where leading and trailing edge devices are attached. Two more parameters control the height and length of waves that can occur in an ideally smooth shape during manufacturing. A fifth parameter controls the depth of span-wise skin buckling waves along a 3-D wing. Parameters are defined to be uniformly distributed with a maximum size of 0.4 mm and 0.15 mm for steps and waves to remain within common manufacturing tolerances. The analysis chain is demonstrated with two test cases. The first, the RAE2822 airfoil, uses transonic freestream parameters set by the ADODG Benchmark Case 2. The results show a mean drag of nearly 10 counts above the deterministic case with fixed lift, and a 2 count increase for a fixed angle of attack version of the case. Each case also has small variations in lift and angle of attack of about 0.5 counts and 0.08°, respectively. Variances for each of the three tracked outputs show that more variability is possible, and even likely. The ONERA M6 transonic wing, popular due to the extensive experimental data available for computational validation, is the second test case. Variation is found to be less substantial here, with a mean drag increase of 0.5 counts, and a mean lift increase of 0.1 counts. Furthermore, the MFMC algorithm enables accurate results with only a few hours of wall time in addition to GPR training.
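To illustrate the estimator at the core of such a chain, the following sketch implements a standard two-fidelity multifidelity Monte Carlo mean estimate with placeholder drag models in place of the CFD solves and GPR surrogates; the model forms, sample counts, and drag values are assumptions, and only the 0.4 mm and 0.15 mm uncertainty bounds come from the text above.

```python
# Conceptual sketch: two-fidelity MFMC mean estimator with a control variate.
# Placeholder "drag" functions stand in for fine-mesh CFD and a cheap surrogate;
# the correction weight is the standard alpha = rho * sigma_hi / sigma_lo.
import numpy as np

rng = np.random.default_rng(7)

def drag_high(step, wave):        # stand-in for the fine-mesh CFD evaluation
    return 100.0 + 25.0 * step + 12.0 * wave**2 + rng.normal(0, 0.2)

def drag_low(step, wave):         # stand-in for the coarse-mesh / GPR surrogate
    return 98.0 + 23.0 * step + 10.0 * wave**2

# Uniform geometric uncertainties: step and wave heights in mm (bounds per text)
n_hi, n_lo = 40, 4000
steps = rng.uniform(0.0, 0.4, n_lo)
waves = rng.uniform(0.0, 0.15, n_lo)

f_lo_all = np.array([drag_low(s, w) for s, w in zip(steps, waves)])
f_hi = np.array([drag_high(s, w) for s, w in zip(steps[:n_hi], waves[:n_hi])])
f_lo = f_lo_all[:n_hi]            # low-fidelity values at the shared samples

rho = np.corrcoef(f_hi, f_lo)[0, 1]
alpha = rho * f_hi.std() / f_lo.std()
mfmc_mean = f_hi.mean() + alpha * (f_lo_all.mean() - f_lo.mean())
print(f"MFMC mean drag estimate: {mfmc_mean:.2f} counts")
```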
