171 |
Using AI to improve the effectiveness of turbine performance data. Shreyas Sudarshan Supe (17552379), 06 December 2023 (has links)
For turbocharged engine simulation analysis, manufacturer-provided data are typically used to predict the mass flow and efficiency of the turbine. To create a turbine map, physical tests are performed in labs at various turbine speeds and expansion ratios. These tests can be very expensive and time-consuming, and current testing methods have limitations that introduce errors into the turbine map. As such, only a modest set of data can be generated, all of which has to be interpolated and extrapolated to create a smooth surface that can then be used for simulation analysis.

The manufacturer's current method is a physics-informed polynomial regression model that uses the blade speed ratio (BSR) in the polynomial function to model efficiency and the mass flow parameter (MFP). This method is memory-consuming and provides lower-than-desired accuracy. The model is decades old and must be updated with state-of-the-art machine learning models to remain competitive. Currently, CTT faces errors of up to +/-2% in most turbine maps for efficiency and MFP, and the aim is to reduce the error to 0.5% when interpolating data points in the available region. The current model also extrapolates data to regions where experimental data cannot be measured. Physical tests cannot validate this extrapolation, which can only be evaluated using CFD analysis.

The thesis focuses on investigating different AI techniques to increase the accuracy of the model for interpolation and on evaluating the models for extrapolation. The data were made available by CTT and consisted of various turbine parameters, including expansion ratio (ER), turbine speed, efficiency, and MFP, which were considered significant in turbine modeling. The AI models developed use these four parameters, with ER and turbine speed as predictors and efficiency and MFP as responses. Multiple supervised ML models, namely SVM, GPR, LMANN, BRANN, and GBPNN, were developed and evaluated. Of these five models, BRANN performed best, achieving an error of 0.5% across multiple turbines for both efficiency and MFP. The same model was used to demonstrate extrapolation, where it gave unreliable predictions; adding data points to the training set at the far ends of the testing regions greatly improved the overall appearance of the map.

An additional contribution is the prediction of a complete expansion-ratio line, evaluated against CTT test data points, where the model achieved an accuracy of over 95%. Since physical testing in a lab is expensive and time-consuming, another goal of the project was to reduce the number of data points required for ANN model training. Strategically selecting which points to remove is of utmost importance, as some data points play a major role in the training of the ANN and can greatly affect the model's overall accuracy. Up to 50% of the data points were removed from the training inputs, and it was found that BRANN could still predict a satisfactory turbine map when 20% of the overall data points were removed across various regions.
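To make the modeling setup concrete, the sketch below fits a Gaussian process regressor (one of the five model families evaluated) to a synthetic (ER, turbine speed) → efficiency surface. The data, scaling, and kernel choices are illustrative assumptions, not CTT's data or the thesis's BRANN implementation.

```python
# Illustrative sketch (not the thesis code): fitting a Gaussian-process surrogate
# to a toy "turbine map", with expansion ratio (ER) and turbine speed as predictors
# and efficiency as the response. All data here are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic map: a smooth efficiency surface over (ER, speed), stand-in for lab data.
er = rng.uniform(1.2, 3.5, 80)            # expansion ratio
speed = rng.uniform(30e3, 120e3, 80)      # turbine speed [rpm]
eff = 0.75 - 0.4 * (er - 2.2) ** 2 / 4 + 0.05 * np.sin(speed / 2e4)
X = np.column_stack([er, speed / 1e5])    # crude rescaling of speed
y = eff + rng.normal(0, 0.005, er.size)   # measurement noise

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True)
gpr.fit(X, y)

# Interpolation query inside the tested region, with an uncertainty estimate.
X_new = np.array([[2.0, 0.8]])
mean, std = gpr.predict(X_new, return_std=True)
print(f"predicted efficiency: {mean[0]:.3f} +/- {std[0]:.3f}")
```

The predictive standard deviation returned by the GP is one way to flag regions where the map is being extrapolated rather than interpolated.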
|
172 |
Simulation and optimization of steam-cracking processes. Campet, Robin, 17 January 2019 (has links)
Thermal cracking is an industrial process sensitive to both temperature and pressure operating conditions. The use of internally ribbed reactors is a passive method to enhance the chemical selectivity of the process, thanks to a significant increase in heat transfer. However, this method also induces an increase in pressure loss, which is detrimental to the chemical yield and must be quantified. Because of the complexity of turbulence and chemical kinetics, and because detailed experimental measurements are difficult to conduct, the real advantage of such geometries in terms of selectivity is poorly known and difficult to assess. This work aims both at evaluating the real benefits of internally ribbed reactors in terms of chemical yield and at proposing innovative and optimized reactor designs. This is made possible by the Large Eddy Simulation (LES) approach, which allows a detailed study of the reactive flow inside several reactor geometries. The AVBP code, which solves the compressible Navier-Stokes equations for turbulent flows, is used to simulate thermal cracking with a dedicated numerical methodology. In particular, the effects of pressure loss and heat transfer on chemical conversion are compared for a smooth and a ribbed reactor in order to quantify the impact of wall roughness under industrial operating conditions. An optimization methodology, based on series of LES and Gaussian process models, is finally developed, and an innovative reactor design for thermal cracking applications, which maximizes the chemical yield, is proposed.
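The design-optimization loop described in the last sentence can be illustrated schematically: a Gaussian process surrogate is fitted to the yields of the few expensive LES runs, and an acquisition criterion selects the next design to simulate. The sketch below uses a synthetic one-dimensional objective and Expected Improvement purely as an illustration; the actual methodology, objective, and design parameters of the thesis may differ.

```python
# Hedged sketch of Gaussian-process-based design optimization: a GP surrogate of
# chemical yield as a function of one rib-design parameter, with Expected Improvement
# choosing the next (expensive) LES run. The objective is a synthetic stand-in.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def yield_proxy(x):                      # placeholder for one costly LES evaluation
    return -(x - 0.6) ** 2 + 0.02 * np.sin(20 * x)

X = np.array([[0.1], [0.5], [0.9]])      # initial designs already simulated
y = yield_proxy(X.ravel())

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(5):                       # a few sequential design iterations
    gp.fit(X, y)
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # Expected Improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, yield_proxy(x_next[0]))

print("best design parameter found:", X[np.argmax(y)].item(), "yield:", y.max())
```

Each loop iteration stands in for one full LES of a candidate reactor geometry, which is why the surrogate, rather than the simulator itself, is explored densely.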
|
173 |
Monte Carlo Hamiltonian methods in non-parametric Bayesian inference of extreme values. Hartmann, Marcelo, 09 March 2015 (has links)
In this work we propose a Bayesian nonparametric approach for modeling extreme value data. We treat the location parameter μ of the generalized extreme value (GEV) distribution as a random function following a Gaussian process model (Rasmussen & Williams 2006). This configuration leads to no closed-form expression for the high-dimensional posterior distribution. To tackle this problem we use the Riemannian Manifold Hamiltonian Monte Carlo algorithm, which allows sampling from posterior distributions with complex forms and unusual correlation structures (Calderhead & Girolami 2011). Moreover, we propose an autoregressive time series model of order p assuming the generalized extreme value distribution for the noise, and derive its Fisher information matrix. Throughout this work we carry out computational simulation studies to assess the performance of the algorithm and its variants, and present several examples with simulated and real data sets.
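The sketch below writes out the model structure in code: an unnormalized log-posterior for the latent location values μ(t) under a Gaussian-process prior and a GEV likelihood. The kernel, the fixed shape and scale parameters, and the toy data are assumptions for illustration; in the thesis these would be inferred, and the posterior would be explored with Riemannian Manifold HMC rather than evaluated point-wise.

```python
# Minimal sketch (assumptions, not the thesis code) of the model described above:
# a GEV likelihood whose location parameter mu is a latent function with a
# Gaussian-process prior, evaluated at the observation inputs.
import numpy as np
from scipy.stats import genextreme, multivariate_normal

def gev_gp_log_posterior(mu_latent, y, t, shape=-0.1, scale=1.0, ls=1.0, var=1.0):
    """Unnormalized log-posterior of the latent location values mu_latent = mu(t)."""
    # GP prior: squared-exponential covariance over the inputs t
    d2 = (t[:, None] - t[None, :]) ** 2
    K = var * np.exp(-0.5 * d2 / ls**2) + 1e-8 * np.eye(len(t))
    log_prior = multivariate_normal.logpdf(mu_latent, mean=np.zeros(len(t)), cov=K)
    # GEV likelihood; scipy's `c` is the negative of the usual shape parameter xi
    log_lik = genextreme.logpdf(y, c=-shape, loc=mu_latent, scale=scale).sum()
    return log_lik + log_prior

# Toy data: maxima observed at 20 time points
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 20)
y = genextreme.rvs(c=0.1, loc=np.sin(2 * np.pi * t), scale=1.0, random_state=rng)
print(gev_gp_log_posterior(np.zeros(20), y, t))
```

Because there is one latent μ value per observation, the posterior dimension grows with the sample size, which is what makes gradient-based samplers such as RMHMC attractive here.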
|
174 |
Road features detection and sparse map-based vehicle localization in urban environments. Hata, Alberto Yukinobu, 13 December 2016 (has links)
Localization is one of the fundamental components of autonomous vehicles, enabling tasks such as overtaking, lane keeping and self-navigation. Urban canyons and bad weather interfere with the reception of GPS satellite signals, which prohibits the exclusive use of such technology for vehicle localization in urban areas. Alternatively, map-aided localization methods have been employed to enable position estimation without dependence on GPS devices. In this solution, the vehicle position is given as the place where the sensor measurement best matches the environment map. Before building the maps, features of the environment must be extracted from sensor measurements. In vehicle localization, curbs and road markings have been extensively employed as mapping features. However, most urban mapping methods rely on a street free of obstacles or require repeated measurements of the same place to avoid occlusions. The construction of an accurate representation of the environment is necessary for a proper match of sensor measurements to the map during localization. To avoid the need for a manual process to remove occluding obstacles and unobserved areas, a vehicle localization method that supports maps built from partial observations of the environment is proposed. In this localization system, maps are formed by curbs and road markings extracted from multilayer laser sensor measurements. Curb structures are detected even in the presence of vehicles that occlude the roadsides, thanks to the use of robust regression. The road-marking detector employs Otsu thresholding to analyze infrared remittance data, which makes the method insensitive to illumination. Detected road features are stored in two map representations: the occupancy grid map (OGM) and the Gaussian process occupancy map (GPOM). The first is a popular map structure that represents the environment through fine-grained grids. The second is a continuous representation that can estimate the occupancy of unseen areas. The Monte Carlo localization (MCL) method was adapted to support the obtained maps of the urban environment. Vehicle localization was therefore tested with an MCL that supports OGM and an MCL that supports GPOM. For the MCL based on GPOM, a new measurement likelihood based on the multivariate normal probability density function is formulated. Experiments were performed in real urban environments. Maps were built using sparse laser data to verify the reconstruction of non-observed areas. The localization system was evaluated by comparing the results with a high-precision GPS device, and the results were also compared with localization based on OGM.
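The particle-weighting step at the heart of MCL can be sketched as follows: each pose hypothesis (particle) is weighted by the likelihood of the measured feature ranges given the map, here modeled with a multivariate normal density as in the GPOM-based variant described above. The map, measurements and noise values are illustrative placeholders, not the thesis's data or implementation.

```python
# Hedged sketch of the particle-weighting step in Monte Carlo localization (MCL),
# using a multivariate normal measurement likelihood. Toy 2D map and ranges only.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def expected_ranges(pose, landmarks):
    """Ranges the sensor would measure to mapped features (curbs/markings) from `pose`."""
    return np.linalg.norm(landmarks - pose[:2], axis=1)

landmarks = np.array([[5.0, 0.0], [0.0, 4.0], [-3.0, 2.0]])   # toy map features
true_pose = np.array([1.0, 1.0])
z = expected_ranges(true_pose, landmarks) + rng.normal(0, 0.1, 3)  # noisy measurement

particles = rng.uniform(-2, 2, size=(500, 2)) + true_pose      # pose hypotheses
R = 0.1**2 * np.eye(3)                                          # measurement covariance

weights = np.array([
    multivariate_normal.pdf(z, mean=expected_ranges(p, landmarks), cov=R)
    for p in particles
])
weights /= weights.sum()
estimate = weights @ particles                                   # weighted mean pose
print("estimated position:", estimate)
```

A GPOM would additionally supply an occupancy uncertainty for unseen areas, which can be folded into the covariance of the measurement model.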
|
175 |
Identifying exoplanets and unmasking false positives with NGTS. Günther, Maximilian Norbert, January 2018 (has links)
In my PhD, I advanced the scientific exploration of the Next Generation Transit Survey (NGTS), a ground-based wide-field survey operating at ESO's Paranal Observatory in Chile since 2016. My original contribution to knowledge is the development of novel methods to 1) estimate NGTS' yield of planets and false positives; 2) disentangle planets from false positives; and 3) accurately characterise planets. If an exoplanet passes (transits) in front of its host star, we can measure a periodic decrease in brightness. The study of transiting exoplanets gives insight into their size, formation, bulk composition and atmospheric properties. Transit surveys are limited by their ability to identify false positives, which can mimic planets and outnumber them a hundredfold. First, I designed a novel yield simulator to optimise NGTS' observing strategy and identification of false positives (published in Günther et al., 2017a). This showed that NGTS' prime targets, Neptune- and Earth-sized signals, are frequently mimicked by blended eclipsing binaries, allowing me to quantify and prepare strategies for candidate vetting and follow-up. Second, I developed a centroiding algorithm for NGTS, achieving a precision of 0.25 milli-pixel in a CCD image (published in Günther et al., 2017b). With this, one can measure a shift of light during an eclipse, readily identifying unresolved blended objects. Third, I developed a joint Bayesian fitting framework for photometry, centroids, and radial velocity cross-correlation function profiles. This makes it possible to disentangle which object (target or blend) is causing the signal, and to characterise the system. My method has already unmasked numerous false positives. Most importantly, I confirmed that a signal which was almost erroneously rejected is in fact an exoplanet (published in Günther et al., 2018). The presented achievements reduce the contamination of NGTS candidates with blended false positives by 80%, and demonstrate a new approach for unmasking hidden exoplanets. This research enhanced the success of NGTS and can provide guidance for future missions.
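The centroiding idea in the second contribution can be illustrated with a toy calculation: when the fainter, blended star is the one that dims, the flux-weighted centroid of the unresolved pair moves during the eclipse even though the combined light curve looks like a shallow transit. All numbers below are made-up examples, not NGTS measurements or the thesis pipeline.

```python
# Illustrative back-of-the-envelope: a faint blended star inside the photometric
# aperture dims, shifting the flux-weighted centroid of the blend during eclipse.
import numpy as np

def centroid(fluxes, positions):
    """Flux-weighted centroid of unresolved sources, in pixels."""
    fluxes = np.asarray(fluxes, float)
    return (fluxes[:, None] * positions).sum(axis=0) / fluxes.sum()

target_pos, blend_pos = np.array([0.0, 0.0]), np.array([1.5, 0.0])   # 1.5 px apart
f_target, f_blend = 1.0, 0.05                                         # blend 5% as bright

out = centroid([f_target, f_blend], np.vstack([target_pos, blend_pos]))
# A 40%-deep eclipse on the blended star mimics a ~2% "transit" on the combined light,
# but moves the centroid by a measurable amount:
in_ecl = centroid([f_target, 0.6 * f_blend], np.vstack([target_pos, blend_pos]))
print("centroid shift during eclipse [pixels]:", np.linalg.norm(in_ecl - out))
```

In this toy case a roughly 2% dip in the combined light comes with a centroid shift of a few hundredths of a pixel, far larger than the 0.25 milli-pixel precision quoted above.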
|
176 |
Computer experiments: design, modeling and integration. Qian, Zhiguang, 19 May 2006 (links)
The use of computer modeling is fast increasing in almost every
scientific, engineering and business arena. This dissertation
investigates some challenging issues in design, modeling and
analysis of computer experiments, and consists of four major
parts. In the first part, a new approach is developed to combine
data from approximate and detailed simulations to build a
surrogate model based on some stochastic models. In the second
part, we propose some Bayesian hierarchical Gaussian process
models to integrate data from different types of experiments. The
third part concerns the development of latent variable models for
computer experiments with multivariate response with application
to data center temperature modeling. The last chapter is devoted
to the development of nested space-filling designs for multiple
experiments with different levels of accuracy.
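As a concrete illustration of the first part (combining approximate and detailed simulations into one surrogate), the sketch below fits a Gaussian process to many cheap runs and a second Gaussian process to the discrepancy observed at a handful of expensive runs. The test functions and the additive-correction form are assumptions chosen for brevity, not the stochastic models developed in the dissertation.

```python
# Hedged sketch of one common way to combine cheap and expensive simulation data
# with Gaussian processes: a GP fit to many low-fidelity runs, plus a GP on the
# discrepancy at a few high-fidelity runs. Functions below are toy stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cheap(x):      # approximate simulation (fast, biased)
    return np.sin(8 * x) + 0.3 * x
def detailed(x):   # detailed simulation (slow, accurate); sampled only 5 times
    return np.sin(8 * x) + 0.3 * x + 0.3 * (x - 0.5) ** 2 - 0.1

X_lo = np.linspace(0, 1, 30).reshape(-1, 1)
X_hi = np.linspace(0, 1, 5).reshape(-1, 1)

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True)
gp_lo.fit(X_lo, cheap(X_lo.ravel()))

# Discrepancy model: high-fidelity output minus the low-fidelity GP prediction
delta = detailed(X_hi.ravel()) - gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True)
gp_delta.fit(X_hi, delta)

x_test = np.array([[0.37]])
fused = gp_lo.predict(x_test) + gp_delta.predict(x_test)
print("fused prediction:", fused[0], " true detailed value:", detailed(0.37))
```

A fully Bayesian hierarchical treatment, as in the second part, would additionally place priors on the kernels and on the relationship between fidelities.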
|
177 |
Characterization and construction of max-stable processes. Strokorb, Kirstin, 02 July 2013 (has links)
No description available.
|
178 |
Fast uncertainty reduction strategies relying on Gaussian process models. Chevalier, Clément, 18 September 2013 (has links) (PDF)
This thesis deals with sequential and batch-sequential evaluation strategies for real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal Stepwise Uncertainty Reduction (SUR) strategies are studied for two different problems, motivated by application cases in nuclear safety. First, we address the problem of identifying the excursion set above a threshold T of a real-valued function f. We then study the problem of identifying the set of "robust, controlled" configurations, i.e. the set of controlled inputs for which the function remains below T whatever the values of the uncontrolled inputs. New SUR strategies are presented, together with efficient procedures and formulas that make them usable in concrete applications. The use of fast formulas to recompute the posterior mean or covariance function of a Gaussian process (the so-called "kriging update formulas") does not only provide substantial computational savings; it is also one of the key ingredients for obtaining closed-form expressions that make computationally expensive evaluation strategies practical. A contribution to batch-sequential optimization based on the Multi-points Expected Improvement criterion is also presented.
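The "kriging update formulas" mentioned above refer to the fact that, when one new evaluation is added, the Gaussian-process posterior can be updated from the current posterior instead of being recomputed from scratch. The sketch below checks this on a toy one-dimensional example with a squared-exponential kernel; the kernel, data and noise level are illustrative choices, not those of the thesis applications.

```python
# Hedged sketch of the kriging update idea: condition the current GP posterior on
# one new observation and compare with a full refit. Toy kernel and data only.
import numpy as np

def k(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-8):
    """Posterior mean/covariance of a zero-mean GP at test points Xs."""
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    A = np.linalg.solve(K, Ks)
    return A.T @ y, Kss - Ks.T @ A

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 6);  y = np.sin(6 * X)
Xs = np.linspace(0, 1, 50)
x_new, y_new = np.array([0.42]), np.sin(6 * 0.42)

m_n, C_n = gp_posterior(X, y, np.concatenate([Xs, x_new]))
m_s, c_cross = m_n[:-1], C_n[:-1, -1]          # current mean at Xs, cov(Xs, x_new)
m_new, s2_new = m_n[-1], C_n[-1, -1]           # current mean/variance at x_new

# Update formulas: condition the current posterior on the single new observation
m_upd = m_s + c_cross / (s2_new + 1e-8) * (y_new - m_new)
v_upd = np.diag(C_n)[:-1] - c_cross**2 / (s2_new + 1e-8)

# Check against a full refit with the new point included
m_full, C_full = gp_posterior(np.append(X, x_new), np.append(y, y_new), Xs)
print("max |mean diff|:", np.abs(m_upd - m_full).max(),
      " max |var diff|:", np.abs(v_upd - np.diag(C_full)).max())
```

The printed differences between the updated and fully refitted posteriors are at numerical-precision level, which is what makes look-ahead SUR criteria affordable: candidate points or batches can be scored without refitting the model each time.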
|
179 |
Mathematical and numerical methods for the modeling of deformations and image texture analysis. Applications in medical imaging. Chesseboeuf, Clément, 23 November 2017 (has links)
We present a numerical procedure for the matching of 3D MR images. The problem of image matching is addressed through the usual distinction between the deformation model and the matching criterion. The deformation model is based on the theory of computational anatomy: the set of deformations is a group of diffeomorphisms generated by integrating vector fields. The discrepancy between the two images is evaluated by comparing their level lines, represented by a differential current in the dual of a space of vector fields. This representation leads to a quickly computable non-local criterion. The optimisation method then minimizes the criterion following the idea of the so-called sub-optimal algorithm. We take advantage of the Eulerian and periodic description of the algorithm to obtain an efficient numerical procedure. The algorithm can be applied to 3D MR images, and numerical experiments are presented. In another part, we focus on theoretical properties of the algorithm. We begin by simplifying the equation representing the evolution of the deformed image and use the theory of viscosity solutions to study the simplified equation. The second issue we are interested in is change-point estimation for a Gaussian sequence with a change in the variance parameter. The main feature of our model is that we work with infill data, so the distribution of the data can evolve jointly with the sample size. The usual approach is to introduce a contrast function and use its maximizer as the change-point estimator. We first obtain information about the asymptotic fluctuations of the contrast function around its mean function, and then focus on the convergence of the change-point estimator. The most direct application concerns the detection of a change in the Hurst parameter of a fractional Brownian motion. The estimator depends on a parameter p > 0, generalizing the usual choice p = 2, and we present results illustrating the advantage of choosing p < 2.
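To make the contrast-function approach concrete, the sketch below implements a simple two-segment contrast on p-th absolute moments of the observations and takes its maximizer as the change-point estimate (p = 2 recovers the usual Gaussian-likelihood contrast). This is a generic illustration under simplifying assumptions; the precise contrast and the infill asymptotics studied in the thesis are more involved.

```python
# Hedged sketch of a contrast-function change-point estimator for a change in
# variance, using p-th absolute moments of the data. Generic illustration only.
import numpy as np

def change_point(x, p=2.0):
    """Return the index maximizing a two-segment contrast on |x|**p."""
    n = len(x)
    a = np.abs(x) ** p
    best_k, best_val = None, -np.inf
    for k in range(5, n - 5):                       # keep a few points in each segment
        m1, m2 = a[:k].mean(), a[k:].mean()
        val = -(k * np.log(m1) + (n - k) * np.log(m2))
        if val > best_val:
            best_k, best_val = k, val
    return best_k

rng = np.random.default_rng(3)
n, true_k = 500, 300
x = np.concatenate([rng.normal(0, 1.0, true_k),      # standard deviation 1 before
                    rng.normal(0, 2.0, n - true_k)]) # standard deviation 2 after
print("estimated change point:", change_point(x, p=2.0), " (true:", true_k, ")")
```

For the Hurst-parameter application, the same recipe would be applied to increments (or generalized variations) of the observed fractional Brownian motion rather than to the raw sequence.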
|
180 |
Managing marketing strategy from a new perspective: is there a relationship between physics and management? Mendes, Armando Praça, January 2004 (has links)
Physics and management concentrate their research on phenomena that, in some ways, resemble one another, leading us to question the great integral of the universe to which we are all subject. Exploring analogies, this work brings the organizational world close to that of universal systems, unstable and non-integrable, whose evolution is determined by the arrow of time. It shows that in management, as in physics, everything seems to converge toward an inexhaustible repertoire of bifurcations and possibilities for the market destiny of products, services and brands along a continuum. To soften the effects of these uncertainties, a simplification of these complex social systems is sought through a proposed model, to be constructed and tested, based on factors established in the business-management literature as drivers of consumer choice: a Gaussian process of 'perceived value' that can serve as a tool for strategic and managerial decisions within companies.
|