141

[en] BRANCHING PROCESSES FOR EPIDEMICS STUDY / [pt] PROCESSOS DE RAMIFICAÇÃO PARA O ESTUDO DE EPIDEMIAS

JOAO PEDRO XAVIER FREITAS 26 October 2023 (has links)
[en] This work models an epidemic's spread over time with a stochastic approach. The number of new infections per infector is modeled as a discrete random variable, named here the contagion. The evolution of the disease over time is therefore a stochastic process. More specifically, the propagation is modeled as the Bienaymé-Galton-Watson process, a branching process with discrete parameter. In this process, for a given time, the number of infected members, i.e., a generation of infected members, is a random variable. In the first part of this dissertation, given that the mass function of the contagion random variable is known, four methodologies for finding the mass functions of the generations of the stochastic process are compared: probability generating functions with and without polynomial identities, Markov chains, and Monte Carlo simulations. The first and third methodologies provide analytical expressions relating the contagion random variable to the generation-size random variable. These analytical expressions are used in the second part of the dissertation, where a classical inverse problem of Bayesian parametric inference is studied. With the help of Bayes' rule, parameters of the contagion random variable are inferred from realizations of the stochastic process. The analytical expressions obtained in the first part of the work are used to build appropriate likelihood functions. To solve the inverse problem, two different ways of using data from the Bienaymé-Galton-Watson process are developed and compared: when the data are realizations of a single generation of the branching process, and when the data are a single realization of the branching process observed over a certain number of generations. The criterion used in this work to stop the update process in the Bayesian parametric inference is based on the L2-Wasserstein distance, a metric grounded in optimal mass transport. All numerical and symbolic routines developed for this work are written in MATLAB.
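As an illustrative aside (not taken from the thesis itself), the Monte Carlo methodology the abstract lists can be sketched in a few lines. The contagion law is assumed Poisson with mean 1.5 purely for illustration, and Python stands in for the MATLAB routines of the original work:

```python
import numpy as np

rng = np.random.default_rng(0)

def bgw_generation_sizes(offspring_mean, n_generations, n_realizations):
    """Monte Carlo simulation of a Bienaymé-Galton-Watson branching process.

    Each infected member produces a Poisson-distributed number of new
    infections (the contagion random variable); generation k+1 is the sum
    of the offspring of every member of generation k.
    """
    sizes = np.ones((n_realizations, n_generations + 1), dtype=np.int64)
    for k in range(n_generations):
        # The sum of n i.i.d. Poisson(m) variables is Poisson(n * m),
        # so each realization needs only a single draw per generation.
        sizes[:, k + 1] = rng.poisson(offspring_mean * sizes[:, k])
    return sizes

sizes = bgw_generation_sizes(offspring_mean=1.5, n_generations=5,
                             n_realizations=100_000)
print(sizes[:, 5].mean())  # E[Z_5] = 1.5**5 = 7.59375
```

The empirical histogram of any column of `sizes` approximates the mass function of that generation, which is exactly the quantity the other three methodologies compute analytically.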
142

Computational and Machine Learning-Reinforced Modeling and Design of Materials under Uncertainty

Hasan, Md Mahmudul 05 July 2023 (has links)
The component-level performance of materials is fundamentally determined by the underlying microstructural features. Therefore, designing high-performance materials using multi-scale models plays a significant role in improving the predictability, reliability, proper functioning, and longevity of components for a wide range of applications in the fields of aerospace, electronics, energy, and structural engineering. This thesis aims to develop new methodologies to design microstructures under inherent material uncertainty by incorporating machine learning techniques. To achieve this objective, the study addresses gradient-based and machine learning-driven design optimization methods to enhance homogenized linear and non-linear properties of polycrystalline microstructures. However, variations arising from the thermo-mechanical processing of materials affect microstructural features and properties by propagating over multiple length scales. To quantify this inherent microstructural uncertainty, this study introduces a linear programming-based analytical method. When this analytical uncertainty quantification formulation is not applicable (e.g., uncertainty propagation on non-linear properties), a machine learning-based inverse design approach is presented to quantify the microstructural uncertainty. Example design problems are discussed for different polycrystalline systems (e.g., Titanium, Aluminum, and Galfenol). Though conventional machine learning performs well when used for designing microstructures or modeling material properties, its predictions may still fail to satisfy design constraints associated with the physics of the system. Therefore, a physics-informed neural network (PINN) is developed to incorporate the problem physics into the machine learning formulation. In this study, a PINN model is built and integrated into materials design to study the deformation processes of Copper and a Titanium-Aluminum alloy. 
/ Doctor of Philosophy / Microstructure-sensitive design is a high-throughput computational approach for materials design, where material performance is improved through the control and design of microstructures. It enhances component performance and, subsequently, the overall system's performance at the application level. This thesis aims to design microstructures for polycrystalline materials such as Galfenol, Titanium-Aluminum alloys, and Copper to obtain desired mechanical properties for certain applications. The advantage of the microstructure-sensitive design approach is that multiple microstructures can be suggested that provide similar values of the design objectives. Therefore, manufacturers can follow any of these microstructure designs to fabricate materials with the desired properties. Moreover, the microstructure uncertainty arising from variations in thermo-mechanical processing and in the measurement of experimental data is quantified. It is necessary to address the resulting randomness of the microstructure because it can alter the expected mechanical properties. To check the manufacturability of proposed microstructure designs, a physics-informed machine learning model is developed to build a relation between the process, microstructure, and material properties. This model can be used to solve the process design problem and identify the processing parameters needed to achieve a given/desired microstructure.
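The core idea behind a physics-informed formulation, penalizing violations of the governing physics alongside the data misfit, can be illustrated without a neural network. The sketch below is not the PINN of the thesis: it fits a 1-D field to sparse noisy data while softly enforcing a toy governing equation u'' = 0 through a second-difference penalty, and all numbers are invented for illustration.

```python
import numpy as np

# Grid and sparse, noisy observations of a 1-D field u(x).
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u_true = 2.0 * x + 1.0                        # satisfies the toy physics u'' = 0
obs_idx = np.array([5, 20, 35, 45])
rng = np.random.default_rng(1)
u_obs = u_true[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

# Data-misfit rows: u[obs_idx] should match the observations.
A_data = np.zeros((obs_idx.size, n))
A_data[np.arange(obs_idx.size), obs_idx] = 1.0

# Physics rows: second-difference stencil softly enforcing u'' = 0 inside.
A_phys = np.zeros((n - 2, n))
for i in range(1, n - 1):
    A_phys[i - 1, i - 1:i + 2] = np.array([1.0, -2.0, 1.0]) / h**2

lam = 0.1  # weight trading off data fit against the physics residual
A = np.vstack([A_data, lam * A_phys])
b = np.concatenate([u_obs, np.zeros(n - 2)])
u_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
```

A PINN replaces the linear field `u_fit` with a neural network and the stencil with automatic differentiation, but the composite loss structure is the same.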
143

DIMENSION REDUCTION, OPERATOR LEARNING AND UNCERTAINTY QUANTIFICATION FOR PROBLEMS OF DIFFERENTIAL EQUATIONS

Shiqi Zhang (12872678) 26 July 2022 (has links)
In this work, we mainly focus on dimension reduction, operator learning, and uncertainty quantification for problems of differential equations. The supervised machine learning methods introduced here belong to a newly booming field compared to traditional numerical methods. The building blocks of this work are mainly Gaussian processes and neural networks.
The first part focuses on supervised dimension reduction problems. A new framework based on rotated multi-fidelity Gaussian process regression is introduced. It can effectively solve high-dimensional problems when the data are insufficient for traditional methods, and an accurate surrogate Gaussian process model of the original problem can be formulated. The second part is a physics-assisted Gaussian process framework with active learning for forward and inverse problems of partial differential equations (PDEs). In this work, a Gaussian process regression model is combined with given physical information to find solutions or discover unknown coefficients of given PDEs. Three different models are introduced, and their performance is compared and discussed. Lastly, we propose an attention-based MultiAuto-DeepONet for operator learning in stochastic problems. The target of this work is to solve operator learning problems related to time-dependent stochastic differential equations (SDEs). The work builds on MultiAuto-DeepONet, and an attention mechanism is applied to improve model performance on specific types of problems. Three different types of attention mechanism are presented and compared. Numerical experiments are provided to illustrate the effectiveness of the proposed models.
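As a minimal illustration of the Gaussian process building block the abstract mentions (a generic textbook sketch, not the rotated multi-fidelity framework of the thesis), posterior-mean GP regression in one dimension looks like this; the kernel hyperparameters are arbitrary:

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP conditioned on training data."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2.0 * np.pi * x_train)
mu_post = gp_posterior_mean(x_train, y_train, np.array([0.31]))
print(mu_post)  # close to sin(2*pi*0.31)
```

Multi-fidelity and physics-assisted variants extend this same conditioning machinery with cross-fidelity covariances or physics-derived observations.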
144

Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission

Siddhesh Ajay Naidu (18437880) 28 April 2024 (has links)
Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the criticality of their failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, an MDA model is developed for an Earth-Mars mission by employing the General Mission Analysis Tool (GMAT) to model the mission trajectory and Rocket Propulsion Analysis (RPA) to design the engines. Using this direct MDA model, the deterministic optimization (DO) of the system is performed first and yields a design that completes the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date over a 20-day window and in the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instant in the Earth inertial frame. The fuel consumed during the trajectory correction maneuver (TCM) ranges from 1 to 250 kg, while the fuel consumed during Mars orbit insertion (MOI) ranges from 350 to 810 kg. Using the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian process (GP) models trained on data from the MDA model were implemented into the UQ framework, and their results were compared to those of the direct MDA method. 
When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time. Compared to the direct MDA, the time-of-flight range matched well, while the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to perform reliability-based design optimization (RBDO) feasibly for the spacecraft and trajectory system. Under the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity of performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and reduced the time of flight by 0.55 days.
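The Monte Carlo UQ step described above can be sketched with a deliberately simplified stand-in for the MDA model: the Tsiolkovsky rocket equation with an uncertain specific impulse. All numbers below are hypothetical and are not the GMAT/RPA values of the thesis.

```python
import numpy as np

G0 = 9.80665  # standard gravity, m/s^2

def fuel_required(delta_v, isp, dry_mass):
    """Propellant mass for a given delta-v via the Tsiolkovsky rocket equation."""
    return dry_mass * (np.exp(delta_v / (isp * G0)) - 1.0)

# Hypothetical inputs: a 2 km/s maneuver, 1000 kg dry mass, and a specific
# impulse uncertain around 320 s with a 2% standard deviation.
rng = np.random.default_rng(2)
isp_samples = rng.normal(320.0, 0.02 * 320.0, 100_000)
fuel = fuel_required(2000.0, isp_samples, 1000.0)
print(fuel.mean(), np.percentile(fuel, [2.5, 97.5]))
```

In the thesis the expensive forward model (or its GP surrogate) takes the place of `fuel_required`, but the sampling-and-percentile workflow is the same.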
145

Modeling Continental-Scale Outdoor Environmental Sound Levels with Limited Data

Pedersen, Katrina Lynn 13 August 2021 (has links)
Modeling outdoor acoustic environments is a challenging problem because they combine diverse sources and propagation effects, including barriers to propagation such as buildings or vegetation. Outdoor acoustic environments are most commonly modeled on small geographic scales (e.g., within a single city). Extending modeling efforts to continental scales is particularly challenging due to the increased variety of geographic environments. Furthermore, acoustic data on which to train and validate models are expensive to collect and therefore relatively limited. It is unclear how models trained on this limited acoustic data will perform at continental scales, which likely contain unique geographic regions that are not represented in the training data. In this dissertation, we consider the problem of continental-scale outdoor environmental sound level modeling, using the contiguous United States as our area of study. We use supervised machine learning methods to produce models of various acoustic metrics and unsupervised learning methods to study the natural structures in geospatial data. We present a validation study of two continental-scale models which demonstrates the need for better uncertainty quantification and tools to guide data collection. Using ensemble models, we investigate methods for quantifying uncertainty in continental-scale models. We also study methods of improving model accuracy, including dimensionality reduction, and explore the feasibility of predicting hourly spectral levels.
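The ensemble-based uncertainty idea mentioned above can be sketched generically: fit several regressors on bootstrap resamples and read the spread of their predictions as an uncertainty estimate. This toy example (polynomial regressors on synthetic data, not the acoustic models of the dissertation) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(200)

# Ensemble of simple regressors fit on bootstrap resamples; the spread of
# their predictions serves as a crude epistemic-uncertainty estimate.
degree, n_members = 5, 30
x_test = np.linspace(0.0, 1.0, 11)
preds = np.empty((n_members, x_test.size))
for m in range(n_members):
    idx = rng.integers(0, x.size, x.size)        # bootstrap resample
    coeffs = np.polyfit(x[idx], y[idx], degree)
    preds[m] = np.polyval(coeffs, x_test)

mean_pred = preds.mean(axis=0)
std_pred = preds.std(axis=0)   # wide wherever the ensemble members disagree
```

Regions of the input space far from the training data (e.g., unrepresented geographic regions) would show up as large `std_pred`, which is exactly the diagnostic an ensemble provides at continental scale.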
146

Robust State Estimation, Uncertainty Quantification, and Uncertainty Reduction with Applications to Wind Estimation

Gahan, Kenneth Christopher 17 July 2024 (has links)
Indirect wind estimation onboard unmanned aerial systems (UASs) can be accomplished using existing air vehicle sensors along with a dynamic model of the UAS augmented with additional wind-related states. It is often desired to extract a mean component of the wind from the higher-frequency fluctuations (i.e., turbulence). Commonly, a variation of the Kalman filter is used, with explicit or implicit assumptions about the nature of the random wind velocity. This dissertation presents an H-infinity (H∞) filtering approach to wind estimation which requires no assumptions about the statistics of the process or measurement noise. To specify the wind frequency content of interest, a low-pass filter is incorporated. We develop the augmented UAS model in continuous time, derive the H∞ filter, and introduce a Kalman-Bucy filter for comparison. The filters are applied to data gathered during UAS flight tests and validated using a vaned air data unit onboard the aircraft. The H∞ filter provides quantitatively better estimates of the wind than the Kalman-Bucy filter, with approximately 10-40% less root-mean-square (RMS) error in the majority of cases. It is also shown that incorporating Dryden turbulence does not improve the Kalman-Bucy results. Additionally, this dissertation describes the theory and process for using generalized polynomial chaos (gPC) to recast the dynamics of a system with non-deterministic parameters as a deterministic system. The concepts are applied to the problem of wind estimation and to characterizing the precision of wind estimates over time due to known parametric uncertainties. A novel truncation method, known as Sensitivity-Informed Variable Reduction (SIVR), was developed. In the multivariate case presented here, gPC and the SIVR-derived reduced gPC (gPCr) exhibit a computational advantage over Monte Carlo sampling-based methods for uncertainty quantification (UQ) and sensitivity analysis (SA), with time reductions of 38% and 98%, respectively. 
Lastly, while many estimation approaches achieve desirable accuracy under the assumption of known system parameters, reducing the effect of parametric uncertainty on wind estimate precision is desirable and has not been thoroughly investigated. This dissertation describes the theory and process for combining gPC and H∞ filtering. In the multivariate case presented, the gPC H∞ filter shows superiority over a nominal H∞ filter in terms of the variance in estimates due to model parametric uncertainty. The error due to parametric uncertainty, as characterized by the variance in estimates from the mean, is reduced by as much as 63%. / Doctor of Philosophy / On unmanned aerial systems (UASs), determining wind conditions indirectly, without direct measurements, is possible by utilizing onboard sensors and computational models. Often, the goal is to isolate the average wind speed while ignoring turbulent fluctuations. Conventionally, this is achieved using a mathematical tool called the Kalman filter, which relies on assumptions about the wind. This dissertation introduces a novel approach called H-infinity (H∞) filtering, which does not rely on such assumptions and includes an additional mechanism to focus on specific wind frequencies of interest. The effectiveness of this method is evaluated using real-world data from UAS flights, comparing it with the traditional Kalman-Bucy filter. Results show that the H∞ filter provides significantly improved wind estimates, with approximately 10-40% less error in most cases. Furthermore, the dissertation addresses the challenge of dealing with uncertainty in wind estimation. It introduces another mathematical technique called generalized polynomial chaos (gPC), which is used to quantify and manage uncertainties within the UAS system and their impact on the indirect wind estimates. 
By applying gPC, the dissertation shows that the amount and sources of uncertainty can be determined more efficiently than by traditional methods (up to 98% faster). Lastly, this dissertation shows the use of gPC to provide more precise wind estimates. In experimental scenarios, employing gPC in conjunction with H∞ filtering demonstrates superior performance compared to using a standard H∞ filter alone, reducing errors caused by uncertainty by as much as 63%.
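A minimal non-intrusive gPC sketch (generic, not the dissertation's SIVR-reduced formulation): for a scalar quantity of interest depending on one uniform random parameter, projecting onto Legendre polynomials by Gauss quadrature yields the mean and variance directly from the expansion coefficients.

```python
import numpy as np

# Quantity of interest f(xi) with one uncertain parameter xi ~ Uniform(-1, 1).
f = np.exp

# Non-intrusive gPC: project f onto Legendre polynomials via Gauss quadrature.
order = 6
nodes, weights = np.polynomial.legendre.leggauss(order + 1)
coeffs = []
for k in range(order + 1):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    norm = 2.0 / (2 * k + 1)             # integral of P_k^2 over [-1, 1]
    coeffs.append(np.sum(weights * f(nodes) * Pk) / norm)

# Mean and variance follow directly from the coefficients
# (the density of xi is 1/2 on [-1, 1], so E[P_k^2] = 1 / (2k + 1)).
mean = coeffs[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))
print(mean, var)  # mean is sinh(1) ≈ 1.1752
```

A handful of deterministic model evaluations at the quadrature nodes replaces thousands of Monte Carlo samples, which is the source of the speedups the abstract reports.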
147

Computational methods for random differential equations: probability density function and estimation of the parameters

Calatayud Gregori, Julia 05 March 2020 (has links)
[EN] Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a wide sense) under study. In addition, inaccuracies in the collected data often arise due to errors in the measurements. It thus becomes necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random ordinary and partial differential equations. The computation of the probability density function of the stochastic solution is important for uncertainty quantification of the model output. Although such computation is a difficult objective in general, certain stochastic expansions for the model coefficients allow faithful representations of the stochastic solution, which permits approximating its density function. In this regard, Karhunen-Loève and generalized polynomial chaos expansions become powerful tools for the density approximation. Also, methods based on discretizations from finite difference numerical schemes permit approximating the stochastic solution, and therefore its probability density function. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainties in their formulation. Specifically, in this thesis we study, in the stochastic sense, the following models that arise in different scientific areas: in Physics, the model for the damped pendulum; in Biology and Epidemiology, the models for logistic and Bertalanffy growth, as well as epidemiological models; and in Thermodynamics, the heat partial differential equation. We rely on Karhunen-Loève and generalized polynomial chaos expansions and on finite difference schemes for the density approximation of the solution. These techniques are only applicable when we have a forward model in which the input parameters already have certain probability distributions set. When the model coefficients are estimated from collected data, we have an inverse problem. The Bayesian inference approach allows estimating the probability distribution of the model parameters from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model output is then carried out using the posterior predictive distribution. In this regard, the last part of the thesis shows the estimation of the distributions of the model parameters from experimental data on bacteria growth. To do so, a hybrid method that combines Bayesian parameter estimation and generalized polynomial chaos expansions is used. / This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017-89664-P. / Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396 / Premios Extraordinarios de tesis doctorales
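The Karhunen-Loève expansion the abstract relies on can be illustrated with its best-known closed form, the expansion of Brownian motion on [0, 1] (a textbook example, not a model from the thesis):

```python
import numpy as np

# Karhunen-Loève expansion of Brownian motion on [0, 1]:
#   W(t) = sum_k sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi) * xi_k,
# with xi_k i.i.d. standard normal; truncating after K terms gives a
# finite-dimensional representation of the stochastic input.
K = 200
t = np.linspace(0.0, 1.0, 101)
omega = (np.arange(1, K + 1) - 0.5) * np.pi
phi = np.sqrt(2.0) * np.sin(np.outer(t, omega)) / omega  # scaled eigenfunctions

rng = np.random.default_rng(4)
xi = rng.standard_normal(K)
w_path = phi @ xi                       # one approximate Brownian path

# The truncated variance sum_k phi_k(t)^2 should approach Var W(t) = t.
var_trunc = (phi**2).sum(axis=1)
print(np.abs(var_trunc - t).max())      # small for large K
```

Feeding such a truncated expansion into a random differential equation reduces the stochastic input to finitely many random variables, after which the density of the solution can be approximated.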
148

COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS

Navarro Quiles, Ana 01 March 2018 (has links)
Ever since the early contributions of Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the 17th century, difference and differential equations have demonstrated their capability to model complex problems of great interest in Engineering, Physics, Chemistry, Epidemiology, Economics, etc. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) from sampled data, which therefore carry uncertainty stemming from measurement errors. In addition, random external factors can affect the system under study, whose complexity may prevent the inputs of the governing equation from being known with certainty. It is then more advisable to consider the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and random differential equations appear. This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, applying principally the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. Ultimately, the goal of this dissertation is the computation of the first probability density function of the solution stochastic process in different problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, skewness, kurtosis, etc., of the solution of the corresponding random difference or differential equation. It also allows one to determine the probability of a certain event of interest that involves the solution. In addition, in some cases the theoretical study is complemented by applications to modelling problems with real data, where the problem of estimating parametric statistical distributions of the inputs is addressed in the context of random difference and differential equations. / Navarro Quiles, A. (2018). COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
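The core of the Random Variable Transformation method described above can be sketched in a few lines: for a monotone scalar mapping Y = g(X) with known input density f_X, the density of Y is f_Y(y) = f_X(g⁻¹(y)) · |d g⁻¹(y)/dy|. The example below is purely illustrative (X uniform on (0, 1) and g(x) = exp(x) are assumptions, not a problem from the thesis), with a Monte Carlo cross-check:

```python
import numpy as np
from scipy import stats

def rvt_pdf(y, f_x, g_inv, g_inv_prime):
    """First probability density of Y = g(X) via the RVT formula
    f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y) / dy| (g monotone)."""
    return f_x(g_inv(y)) * np.abs(g_inv_prime(y))

# Illustrative inputs: X ~ Uniform(0, 1), Y = exp(X), so g^{-1}(y) = log(y).
f_x = stats.uniform(0, 1).pdf           # known input density
g_inv = np.log                          # inverse of g(x) = exp(x)
g_inv_prime = lambda y: 1.0 / y         # derivative of the inverse

y = np.linspace(1.05, np.e - 0.05, 200)
pdf = rvt_pdf(y, f_x, g_inv, g_inv_prime)   # analytically, 1 / y on (1, e)

# Monte Carlo cross-check: histogram of samples of Y = exp(X).
samples = np.exp(stats.uniform(0, 1).rvs(200_000, random_state=0))
hist, edges = np.histogram(samples, bins=50, range=(1.0, np.e), density=True)
```

The same formula generalizes to random vectors via the Jacobian determinant of the inverse mapping, which is the form used throughout work of this kind.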
149

Modeling and Experimental Validation of Mission-Specific Prognosis of Li-Ion Batteries with Hybrid Physics-Informed Neural Networks

Fricke, Kajetan 01 January 2023 (has links) (PDF)
While the second half of the 20th century was dominated by combustion-engine-powered vehicles, climate change and limited oil resources have been forcing car manufacturers and other companies in the mobility sector to switch to renewable energy sources. Electric motors supplied by Li-ion battery cells are at the forefront of this revolution. A challenging but important task is the precise forecasting of the degradation of battery state-of-health (SOH) and state-of-charge (SOC). Hence, there is high demand for models that can predict the SOH and SOC while accounting for the specifics of a given battery cell and its usage profile. While traditional physics-based and data-driven approaches are used to monitor the SOH and SOC, both have limitations: high computational cost, or the need for engineers to continually update their prediction models as new battery cells are developed and deployed in battery-powered vehicle fleets. In this dissertation, we enhance a hybrid physics-informed machine learning version of a battery SOC model to predict voltage drop during discharge. The enhanced model captures the effect of wide variation in load level, in the form of input current, which causes large thermal stress cycles. The cell temperature build-up during a discharge cycle is used to identify temperature-sensitive model parameters. Additionally, we enhance an aging model built upon cumulative energy drawn by introducing the effect of the load level. We then map cumulative energy and load level to battery capacity with a Gaussian process model. To validate our approach, we use a battery aging dataset collected on a self-developed testbed, where a wide range of current levels was used to age battery packs in accelerated fashion. Prediction results show that our model can be successfully calibrated and generalizes across all applied load levels.
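The Gaussian-process step mentioned in the abstract — mapping cumulative energy drawn and load level to remaining capacity — can be sketched as below. Everything here is an assumption for illustration: the data are synthetic, and the fade law and kernel choice are placeholders, not the dissertation's calibrated model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
energy = rng.uniform(0, 500, 80)     # cumulative energy drawn (e.g., Wh), synthetic
load = rng.uniform(0.5, 4.0, 80)     # load level (e.g., C-rate), synthetic
# Assumed fade law: capacity drops with energy throughput, faster at high load.
capacity = 1.0 - 0.0004 * energy * (1 + 0.2 * load) + rng.normal(0, 0.005, 80)

X = np.column_stack([energy, load])
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[100.0, 1.0]) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
).fit(X, capacity)

# Predict normalized remaining capacity (with uncertainty) for a usage profile.
mean, std = gp.predict(np.array([[250.0, 2.0]]), return_std=True)
```

A GP is a natural fit here because it returns a predictive standard deviation alongside the capacity estimate, which matters for mission-specific prognosis.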
150

Modeling Continental-Scale Outdoor Environmental Sound Levels with Limited Data

Pedersen, Katrina Lynn 01 January 2021 (has links) (PDF)
Modeling outdoor acoustic environments is a challenging problem because they combine diverse sources and propagation effects, including barriers to propagation such as buildings or vegetation. Outdoor acoustic environments are most commonly modeled on small geographic scales (e.g., within a single city). Extending modeling efforts to continental scales is particularly challenging due to the increased variety of geographic environments. Furthermore, acoustic data on which to train and validate models are expensive to collect and therefore relatively limited. It is unclear how models trained on such limited acoustic data will perform at continental scales, which likely contain unique geographic regions not represented in the training data. In this dissertation, we consider the problem of continental-scale outdoor environmental sound level modeling, using the contiguous United States as our area of study. We use supervised machine learning methods to produce models of various acoustic metrics and unsupervised learning methods to study the natural structures in geospatial data. We present a validation study of two continental-scale models which demonstrates the need for better uncertainty quantification and for tools to guide data collection. Using ensemble models, we investigate methods for quantifying uncertainty in continental-scale models. We also study methods of improving model accuracy, including dimensionality reduction, and explore the feasibility of predicting hourly spectral levels.
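One common way to realize the ensemble-based uncertainty quantification mentioned in the abstract is to fit several regressors on bootstrap resamples and use the spread of their predictions as an uncertainty proxy. The sketch below is a minimal, generic illustration under that assumption — the features, sound-level function, and model choice are all synthetic stand-ins, not the dissertation's data or models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))   # stand-in geospatial features
# Synthetic "sound level" in dB as a smooth function of the features plus noise.
y = 40 + 20 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 1, 500)

# Fit an ensemble of regressors, each on a bootstrap resample of the data.
members = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))        # bootstrap indices
    m = RandomForestRegressor(n_estimators=50, random_state=seed)
    members.append(m.fit(X[idx], y[idx]))

# At new locations, the ensemble mean is the prediction and the
# across-member standard deviation is the uncertainty estimate.
X_new = rng.uniform(0, 1, size=(5, 3))
preds = np.stack([m.predict(X_new) for m in members])
mean_level = preds.mean(axis=0)
uncertainty = preds.std(axis=0)
```

Regions poorly covered by training data tend to show larger across-member disagreement, which is exactly the signal one would use to guide further data collection.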
