221

Otimização robusta multiobjetivo por análise de intervalo não probabilística : uma aplicação em conforto e segurança veicular sob dinâmica lateral e vertical acoplada

Drehmer, Luis Roberto Centeno January 2017 (has links)
This thesis proposes a new tool for Non-probabilistic Interval Analysis for Multi-objective Robust Design Optimization (NPIA-MORDO). The tool optimizes the lumped suspension parameters of a full vehicle model subjected to a double-lane-change (DLC) maneuver over different road profiles, in order to ensure comfort and safety for the driver. The multi-body model has 15 degrees of freedom (15-DOF), of which eleven belong to the vehicle and its seat and four to the driver's biodynamic model. The multi-objective function is composed of conflicting objectives and their tolerances, such as the root-mean-square (RMS) lateral and vertical accelerations at the driver's seat, both developed during the double-lane-change maneuver. The suspension working space and the road-holding capacity are treated as constraints of the optimization problem. Uncertainties in the system behavior are quantified by non-probabilistic interval analysis, using the α-cut levels method at the zero α-level (the level of greatest dispersion), carried out concurrently with the multi-objective optimization; the uncertainties are applied both to the system parameters and to the design variables to ensure robust results. To validate the model, implemented in MATLAB®, the trajectory of the body's center of gravity during the maneuver is compared with the commercial software CARSIM®, as are the lateral and vertical tire forces. The results are presented in several plots of the Pareto front between the multiple conflicting objectives of the evaluated model. The solutions on the Pareto front satisfy the problem conditions, and the multi-objective function obtained by aggregating the objectives differs by 1.66% between the solutions with the lowest and highest aggregated values. From the design variables of the best solution on the front, plots are generated for each degree of freedom showing the time histories of displacements, velocities and accelerations. For that solution, the RMS vertical acceleration at the driver's seat is 1.041 m/s² with a tolerance of 0.631 m/s², while the RMS lateral acceleration is 1.908 m/s² with a tolerance of 0.168 m/s². The results obtained with NPIA-MORDO confirm that uncertainties in the parameters and design variables can be taken into account while the outer optimization loop is performed, avoiding the need for a separate, subsequent uncertainty-propagation analysis. The non-probabilistic interval analysis employed by the tool is a viable alternative to the standard deviation as a measure of dispersion, because it requires no prior probability distribution and matches practice in the automotive industry, where tolerances are the preferred specification.
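As a rough illustration of the interval step described above, the sketch below bounds a generic response over a box of parameter tolerances at the α = 0 cut level and reports the midpoint and tolerance (half-width) that a robust optimizer could use as objective terms. The response function, parameter names and numerical values are hypothetical placeholders, not the thesis model.

```python
# Minimal sketch of non-probabilistic interval analysis at the alpha = 0 cut level.
# The response function and all numbers are illustrative stand-ins for the 15-DOF model.
import numpy as np
from scipy.optimize import minimize

def rms_seat_acceleration(design, params):
    # Hypothetical surrogate response, not the vehicle model from the thesis.
    k_seat, c_seat = design
    k_tire, road_amp = params
    return np.sqrt((road_amp * k_tire) ** 2 / (k_seat ** 2 + c_seat ** 2) + 0.5)

def interval_bounds(objective, design, param_intervals):
    """Lower/upper bounds of the objective over the parameter box (alpha = 0 cut)."""
    lo = np.array([a for a, _ in param_intervals])
    hi = np.array([b for _, b in param_intervals])
    x0, bounds = 0.5 * (lo + hi), list(zip(lo, hi))
    f_min = minimize(lambda p: objective(design, p), x0, bounds=bounds).fun
    f_max = -minimize(lambda p: -objective(design, p), x0, bounds=bounds).fun
    return f_min, f_max

design = (40e3, 2.5e3)                      # hypothetical seat stiffness and damping
param_intervals = [(180e3, 220e3),          # tire stiffness tolerance interval
                   (0.04, 0.06)]            # road input amplitude tolerance interval
low, up = interval_bounds(rms_seat_acceleration, design, param_intervals)
mid, tol = 0.5 * (low + up), 0.5 * (up - low)
print(f"response interval midpoint {mid:.3f}, tolerance {tol:.3f}")
```

In a multi-objective setting, the midpoint and the tolerance of each objective would both enter the Pareto optimization, which is what makes the resulting designs robust.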
222

Custom supply chain engineering : modeling and risk management : application to the customs / Ingénierie de la chaîne logistique douanière : modélisation et gestion de risques : application au cas des douanes

Hammadi, Lamia 10 December 2018 (has links)
The security, safety and efficiency of the international supply chain are of central importance for governments, for their financial and economic interests, and for the security of their residents. In this regard, society faces multiple threats, such as illicit traffic in drugs, arms and other contraband, as well as counterfeiting and commercial fraud. To counter (detect, prevent, investigate and mitigate) such threats, customs act as the gatekeepers of international trade and the main actors in securing the international supply chain. Customs intervene at all stages along the routing of cargo; every transaction leaving or entering a country must be processed by its customs agencies. In such an environment, customs become an integral thread within the supply chain. We adopt this point of view, with a particular focus on customs operations, and to underline this focus we refer to this analysis as the "customs supply chain". In this thesis, we first set up the concept of the customs supply chain, identify the actors and the structural links between them, and then establish the process mapping, the integration approach and the performance-measurement model of the proposed concept. Secondly, we develop a new approach for managing risks in the customs supply chain based on qualitative analysis; it identifies risk classes and recommends the best possible solutions to reduce the risk level. The approach is applied to Moroccan customs, first using criticality as the risk indicator, obtained with Failure Modes, Effects and Criticality Analysis (FMECA) and the cross Activity-Based Costing (ABC) method, and then using priority weights obtained with the Analytic Hierarchy Process (AHP) and fuzzy AHP (i.e., risk assessment under uncertainty); a benchmarking of the two indicators is then conducted to examine the effectiveness of the obtained results. Finally, we develop stochastic models for risk time series that address the most important challenge of risk modeling in the customs context: seasonality. More specifically, we propose, on the one hand, models based on uncertainty quantification to describe monthly components; these models are fitted by the moment-matching method to time series of seized quantities of illicit traffic at five sites. On the other hand, hidden Markov models are fitted with the EM algorithm to the same observation sequences. We show that these models accurately handle and describe the seasonal components of risk time series in the customs context, and that the fitted models are easily interpreted and provide a good description of important properties of the data, such as the second-order structure and the probability density functions per season and per site.
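As a toy illustration of the two risk indicators mentioned above, the sketch below computes a FMECA-style criticality score (risk priority number) for a few hypothetical risk classes and derives AHP priority weights from a pairwise comparison matrix. The risk names, scores and comparison values are invented for the example and do not come from the Moroccan customs study.

```python
# Minimal sketch: FMECA criticality (severity x occurrence x detection) and
# AHP priority weights from the principal eigenvector of a pairwise comparison matrix.
import numpy as np

# Hypothetical risk classes with 1-10 severity, occurrence and detection scores.
risks = {"drug trafficking": (9, 6, 4),
         "counterfeiting":   (6, 7, 5),
         "commercial fraud": (7, 8, 3)}
criticality = {name: s * o * d for name, (s, o, d) in risks.items()}

# Hypothetical pairwise comparison matrix over the same three risks (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalized priority weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
cr = ci / 0.58                                   # random index 0.58 for a 3 x 3 matrix

print("FMECA criticality:", criticality)
print("AHP weights:", np.round(weights, 3), "| consistency ratio:", round(cr, 3))
```

A fuzzy AHP variant would replace the crisp comparison values with fuzzy numbers (commonly triangular) before extracting the weights.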
223

Data-driven Uncertainty Analysis in Neural Networks with Applications to Manufacturing Process Monitoring

Bin Zhang (11073474) 12 August 2021 (has links)
Artificial neural networks, including deep neural networks, play a central role in data-driven science due to their superior learning capacity and adaptability to different tasks and data structures. However, although quantitative uncertainty analysis is essential for training and deploying reliable data-driven models, the uncertainties in neural networks are often overlooked or underestimated in many studies, mainly due to the lack of a high-fidelity and computationally efficient uncertainty quantification approach. In this work, a novel uncertainty analysis scheme is developed. The Gaussian mixture model is used to characterize the probability distributions of uncertainties in arbitrary forms, which yields higher fidelity than presumed distribution forms, such as Gaussian, when the underlying uncertainty is multimodal, and is more compact and efficient than large-scale Monte Carlo sampling. The fidelity of the Gaussian mixture is refined through adaptive scheduling of the width of each Gaussian component, based on active assessment of the factors that could deteriorate the uncertainty representation quality, such as the nonlinearity of activation functions in the neural network.

Following this idea, an adaptive Gaussian mixture scheme of nonlinear uncertainty propagation is proposed to effectively propagate the probability distributions of uncertainties through layers in deep neural networks or through time in recurrent neural networks. An adaptive Gaussian mixture filter (AGMF) is then designed based on this uncertainty propagation scheme. By approximating the dynamics of a highly nonlinear system with a feedforward neural network, the adaptive Gaussian mixture refinement is applied at both the state-prediction and Bayesian-update steps to closely track the distribution of unmeasurable states. As a result, this new AGMF exhibits state-of-the-art accuracy with a reasonable computational cost on highly nonlinear state-estimation problems subject to high magnitudes of uncertainty. Next, a probabilistic neural network with Gaussian-mixture-distributed parameters (GM-PNN) is developed. The adaptive Gaussian mixture scheme is extended to refine intermediate layer states and ensure the fidelity of both linear and nonlinear transformations within the network, so that the predictive distribution of the output target can be inferred directly without sampling or approximation of integration. The derivatives of the loss function with respect to all the probabilistic parameters in this network are derived explicitly, and therefore the GM-PNN can be trained with any backpropagation method to address practical data-driven problems subject to uncertainty.

The GM-PNN is applied to two data-driven condition-monitoring schemes for manufacturing processes. For tool-wear monitoring in the turning process, a systematic feature normalization and selection scheme is proposed for engineering optimal feature sets extracted from sensor signals. Predictive tool-wear models are established using two methods: a type-2 fuzzy network for interval-type uncertainty quantification and the GM-PNN for probabilistic uncertainty quantification. For porosity monitoring in laser additive manufacturing processes, a convolutional neural network (CNN) is used to learn directly from melt-pool patterns to predict porosity. The classical CNN models, which do not consider uncertainty, are compared with CNN models in which the GM-PNN is embedded as an uncertainty quantification module. For both monitoring schemes, experimental results show that the GM-PNN not only achieves higher prediction accuracy of process conditions than the classical models but also provides more effective uncertainty quantification to facilitate process-level decision-making in the manufacturing environment.

Based on the developed uncertainty analysis methods and their proven successes in practical applications, some directions for future studies are suggested. Closed-loop control systems may be synthesized by combining the AGMF with data-driven controller design. The AGMF can also be extended from a state estimator to parameter estimation problems in data-driven models. In addition, the GM-PNN scheme may be expanded to directly build more complicated models such as convolutional or recurrent neural networks.
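To make the mixture-propagation idea concrete, the sketch below pushes a small Gaussian mixture through a single affine layer, where the transformation of each component is exact; the nonlinear activations are where the adaptive splitting and width scheduling described above would be needed, and they are not shown here. Shapes, weights and values are arbitrary illustrations, not taken from the thesis.

```python
# Minimal sketch: exact propagation of a Gaussian mixture through one affine layer
# y = W x + b. Nonlinear activations would require the adaptive refinement step.
import numpy as np

rng = np.random.default_rng(0)

# Gaussian mixture over a 3-dimensional input: weights, component means, covariances.
weights = np.array([0.6, 0.4])
means = rng.normal(size=(2, 3))
covs = np.stack([np.eye(3) * 0.1, np.eye(3) * 0.3])

W = rng.normal(size=(4, 3))                  # layer weights (4 outputs)
b = rng.normal(size=4)                       # layer bias

out_means = means @ W.T + b                                # each mean -> W m + b
out_covs = np.einsum("ij,kjl,ml->kim", W, covs, W)         # each covariance -> W S W^T
out_weights = weights                                      # mixture weights unchanged

# Overall mean/covariance of the propagated mixture (law of total covariance).
mix_mean = out_weights @ out_means
centered = out_means - mix_mean
mix_cov = (np.einsum("k,kij->ij", out_weights, out_covs)
           + centered.T @ (out_weights[:, None] * centered))
print("mixture mean:", np.round(mix_mean, 3))
print("marginal variances:", np.round(np.diag(mix_cov), 3))
```

Because each component stays Gaussian under an affine map, fidelity is only lost at the nonlinearities, which is why the adaptive component refinement is concentrated there.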
224

DEVELOPMENT OF IMAGE-BASED DENSITY DIAGNOSTICS WITH BACKGROUND-ORIENTED SCHLIEREN AND APPLICATION TO PLASMA INDUCED FLOW

Lalit Rajendran (8960978) 07 May 2021 (has links)
There is growing interest in the use of nanosecond surface dielectric barrier discharge (ns-SDBD) actuators for high-speed (supersonic/hypersonic) flow control. A plasma discharge is created using a nanosecond-duration pulse of several kilovolts, and leads to a rapid heat release and a complex three-dimensional flow field. Past work has been limited to qualitative visualizations such as schlieren imaging, and detailed measurements of the induced flow are required to develop a mechanistic model of the actuator performance.

Background-Oriented Schlieren (BOS) is a quantitative variant of schlieren imaging that measures density gradients in a flow field by tracking the apparent distortion of a target dot pattern. The distortion is estimated by cross-correlation, and the density gradients can be integrated spatially to obtain the density field. Owing to its simple setup and ease of use, BOS has been applied widely and is becoming the preferred density measurement technique. However, there are several unaddressed limitations with potential for improvement, especially for application to complex flow fields such as those induced by plasma actuators.

This thesis presents a series of developments aimed at improving the various aspects of the BOS measurement chain to provide an overall improvement in accuracy, precision, spatial resolution and dynamic range. A brief summary of the contributions is:

1) a synthetic image generation methodology to perform error and uncertainty analysis for PIV/BOS experiments,
2) an uncertainty quantification methodology to report local, instantaneous, a-posteriori uncertainty bounds on the density field, by propagating displacement uncertainties through the measurement chain,
3) an improved displacement uncertainty estimation method using a meta-uncertainty framework whereby uncertainties estimated by different methods are combined based on their sensitivities to image perturbations,
4) a Weighted Least Squares-based density integration methodology to reduce the sensitivity of the density estimation procedure to measurement noise,
5) a tracking-based processing algorithm to improve the accuracy, precision and spatial resolution of the measurements, and
6) a theoretical model of the measurement process to demonstrate the effect of density gradients on the position uncertainty, together with an uncertainty quantification methodology for tracking-based BOS.

These improvements to BOS are then applied to perform a detailed characterization of the flow induced by a filamentary surface plasma discharge and to develop a reduced-order model for the length and time scales of the induced flow. The measurements show that the induced flow consists of a hot gas kernel filled with vorticity in a vortex ring that expands and cools over time. A reduced-order model is developed to describe the induced flow, and applying the model to the experimental data reveals that the vortex ring's properties govern the time scale associated with the kernel dynamics. The model predictions for the actuator-induced flow length and time scales can guide the choice of filament spacing and pulse frequencies for practical multi-pulse ns-SDBD configurations.
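As an illustration of the density-integration step (contribution 4 above), the following sketch reconstructs a scalar field from its measured gradients on a regular grid by solving a least-squares system built from finite-difference operators; a weighted variant would simply scale each row by a confidence weight derived from the displacement uncertainty. The synthetic field, grid size and tolerances are placeholders, not BOS data.

```python
# Minimal sketch: least-squares integration of a measured gradient field (gx, gy)
# on a regular grid; the weighted version would multiply rows of A and entries of b
# by per-measurement confidence weights.
import numpy as np
from scipy.sparse import diags, identity, kron, vstack
from scipy.sparse.linalg import lsqr

ny, nx, h = 40, 50, 1.0
y, x = np.mgrid[0:ny, 0:nx]
rho_true = np.exp(-((x - 25) ** 2 + (y - 20) ** 2) / 60.0)   # synthetic "density" field
gx = np.gradient(rho_true, h, axis=1).ravel()                # measured d(rho)/dx
gy = np.gradient(rho_true, h, axis=0).ravel()                # measured d(rho)/dy

def diff1(n, h):
    # Central differences in the interior, one-sided at the two boundaries.
    D = (diags([-1.0, 1.0], [-1, 1], shape=(n, n)) / (2 * h)).tolil()
    D[0, :2] = [-1.0 / h, 1.0 / h]
    D[-1, -2:] = [-1.0 / h, 1.0 / h]
    return D.tocsr()

Dx = kron(identity(ny), diff1(nx, h))        # d/dx acting on the flattened field
Dy = kron(diff1(ny, h), identity(nx))        # d/dy acting on the flattened field
A = vstack([Dx, Dy])
b = np.concatenate([gx, gy])

rho = lsqr(A, b, atol=1e-12, btol=1e-12)[0].reshape(ny, nx)
rho += rho_true.mean() - rho.mean()          # gradients only fix rho up to a constant
print("max reconstruction error:", np.abs(rho - rho_true).max())
```

Because only gradients are measured, the reconstructed field is defined up to an additive constant, which in practice is fixed by a reference region of known density.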
225

Towards robust prediction of the dynamics of the Antarctic ice sheet: Uncertainty quantification of sea-level rise projections and grounding-line retreat with essential ice-sheet models / Vers des prédictions robustes de la dynamique de la calotte polaire de l'Antarctique: Quantification de l'incertitude sur les projections de l'augmentation du niveau des mers et du retrait de la ligne d'ancrage à l'aide de modèles glaciologiques essentiels

Bulthuis, Kevin 29 January 2020 (has links) (PDF)
Recent progress in the modelling of the dynamics of the Antarctic ice sheet has led to a paradigm shift in the perception of the Antarctic ice sheet in a changing climate. New understanding of the dynamics of the Antarctic ice sheet now suggests that the response of the Antarctic ice sheet to climate change will be driven by instability mechanisms in marine sectors. As concerns have grown about the response of the Antarctic ice sheet in a warming climate, interest has grown simultaneously in predicting with quantified uncertainty the evolution of the Antarctic ice sheet and in clarifying the role played by uncertainties in predicting the response of the Antarctic ice sheet to climate change. Essential ice-sheet models have recently emerged as computationally efficient ice-sheet models for large-scale and long-term simulations of the ice-sheet dynamics and integration into Earth system models. Essential ice-sheet models, such as the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model developed at the Université Libre de Bruxelles, achieve computational tractability by representing essential mechanisms and feedbacks of ice-sheet thermodynamics through reduced-order models and appropriate parameterisations. Given their computational tractability, essential ice-sheet models combined with methods from the field of uncertainty quantification provide opportunities for more comprehensive analyses of the impact of uncertainty in ice-sheet models and for expanding the range of uncertainty quantification methods employed in ice-sheet modelling. The main contributions of this thesis are twofold. On the one hand, we contribute a new assessment and new understanding of the impact of uncertainties on the multicentennial response of the Antarctic ice sheet. On the other hand, we contribute new methods for uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models, with, as a motivation in glaciology, a focus on predicting with quantified uncertainty the retreat of the grounded region of the Antarctic ice sheet. For the first contribution, we carry out new probabilistic projections of the multicentennial response of the Antarctic ice sheet to climate change using the f.ETISh model. We apply methods from the field of uncertainty quantification to the f.ETISh model to investigate the influence of several sources of uncertainty, namely sources of uncertainty in atmospheric forcing, basal sliding, grounding-line flux parameterisation, calving, sub-shelf melting, ice-shelf rheology, and bedrock relation, on the continental response on the Antarctic ice sheet. We provide new probabilistic projections of the contribution of the Antarctic ice sheet to future sea-level rise; we carry out stochastic sensitivity analysis to determine the most influential sources of uncertainty; and we provide new probabilistic projections of the retreat of the grounded portion of the Antarctic ice sheet. For the second contribution, we propose to address uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models within the probabilistic context of the random set theory. We contribute to the development of the concept of confidence sets that either contain or are contained within an excursion set of the spatial response with a specified probability level. 
We propose a new multifidelity quantile-based method for estimating such confidence sets and demonstrate its performance on an application concerned with predicting, with quantified uncertainty, the retreat of the Antarctic ice sheet. In addition to these two main contributions, we contribute to two additional pieces of research, pertaining to the computation of Sobol indices for global sensitivity analysis in small-data settings using the recently introduced probabilistic learning on manifolds (PLoM), and to a multi-model comparison of projections of the contribution of the Antarctic ice sheet to global mean sea-level rise. / Doctorat en Sciences
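As a simplified, single-fidelity illustration of the confidence-set idea described above, the sketch below builds a family of candidate sets by thresholding the pointwise exceedance probability estimated from Monte Carlo samples of a spatial response, and picks the largest candidate set that is contained in the excursion set with the target probability. The one-dimensional synthetic ensemble stands in for ice-sheet model output, and the brute-force search is a plain sample-based version rather than the multifidelity quantile estimator developed in the thesis.

```python
# Minimal sketch: inner confidence set for an excursion region {x : Y(x) >= threshold},
# chosen among candidate sets {x : p_exceed(x) >= t} so that containment holds with the
# requested probability. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
threshold, level = 0.0, 0.95                     # excursion level, confidence level

# Synthetic ensemble of spatial responses standing in for the model output.
samples = np.array([np.cos(3 * x + rng.normal(0, 0.4)) - rng.uniform(0.0, 0.5)
                    for _ in range(2000)])

p_exceed = (samples >= threshold).mean(axis=0)   # pointwise exceedance probability

def containment_prob(t):
    """Probability that the candidate set {p_exceed >= t} lies inside a sampled excursion set."""
    candidate = p_exceed >= t
    inside = ((samples >= threshold) | ~candidate).all(axis=1)
    return inside.mean()

ts = np.linspace(0.0, 1.0, 201)
t_star = next(t for t in ts if containment_prob(t) >= level)   # containment_prob grows with t
conf_set = p_exceed >= t_star
print(f"t* = {t_star:.2f}; confidence set covers {conf_set.mean():.1%} of the domain")
```

In the thesis, the expensive part of this construction, the estimation of the threshold (a quantile of a scalar random variable indexing the candidate family), is addressed with a multifidelity estimator; the loop above is only the brute-force analogue.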
226

Shape optimization of lightweight structures under blast loading

Israel, Joshua James 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Structural optimization of vehicle components for blast mitigation seeks to counteract the damaging effects of an impulsive threat on occupants and critical components. The strong and urgent need for improved protection from blast events has made blast-mitigating component design an active research subject. Standard up-armoring of ground vehicles can significantly increase the mass of the vehicle. Without concurrent modifications to the power train, suspension, braking and steering components, the up-armored vehicles suffer from degraded stability and mobility. For these reasons, there is a critical need for effective methods to generate lightweight components for blast mitigation. The overall objective of this research is to make advances in structural design methods for the optimization of lightweight blast-mitigating systems. This thesis investigates the automated design of isotropic plates to mitigate the effects of blast loading, addressing the design of blast-protective structures from a design optimization perspective. The general design problem is stated as finding the optimum shape of a protective shell of minimum mass satisfying deformation and envelope constraints. This research was conducted in terms of three primary research projects. The first project investigated the design of lightweight structures under deterministic loading conditions, subject to the same objective function and constraints, in order to compare feasible design methodologies by expanding the problem dimension until the limits of performance were reached. The second project applied recently developed uncertainty quantification methods, the univariate dimensional reduction method and the performance moment integration method, to structures under stochastic loading conditions. The third project applied these uncertainty quantification methods to problems of design optimization under uncertainty, in order to develop a methodology for the generation of lightweight, reliable structures. This research has resulted in the construction of a computational framework, incorporating uncertainty quantification methods and various optimization techniques, which can be used for the generation of lightweight structures for blast mitigation under uncertainty. Applied to practical structural design problems, the results demonstrate that the methodologies provide a practical tool to aid the design engineer in generating design concepts for blast-mitigating structures. These methods can be used to advance research into the generation of reliable structures under the uncertain loading conditions inherent to blast events.
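As a toy illustration of one of the uncertainty quantification methods named above, the sketch below uses the univariate dimensional reduction idea to estimate the mean and variance of a response with independent normal inputs from one-dimensional Gauss-Hermite quadratures, and checks the result against Monte Carlo. The response function and input statistics are invented for the example, they are not the blast-loading model, and the variance shown is that of the additive UDR approximation.

```python
# Minimal sketch: univariate dimensional reduction (UDR) moment estimation,
# g(X) ~= sum_i g(mu_1,...,X_i,...,mu_n) - (n-1) g(mu), with independent normal inputs.
import numpy as np

def g(x):
    # Hypothetical response (e.g. a deformation measure), not the thesis model.
    return x[0] ** 2 + 2.0 * np.sin(x[1]) + np.exp(0.3 * x[2]) + 0.1 * x[0] * x[1]

mu = np.array([1.0, 0.5, 0.0])        # input means
sigma = np.array([0.2, 0.3, 0.4])     # input standard deviations
n = len(mu)

# Gauss-Hermite rule: E[f(N(m, s^2))] ~= sum_k (w_k / sqrt(pi)) f(m + sqrt(2) s t_k).
t, w = np.polynomial.hermite.hermgauss(7)
w = w / np.sqrt(np.pi)

mean_udr = -(n - 1) * g(mu)           # the -(n-1) g(mu) term of the decomposition
var_udr = 0.0
for i in range(n):
    xs = np.tile(mu, (len(t), 1))
    xs[:, i] = mu[i] + np.sqrt(2.0) * sigma[i] * t
    gi = np.array([g(x) for x in xs])
    mean_i = w @ gi
    mean_udr += mean_i
    var_udr += w @ (gi - mean_i) ** 2  # independent univariate terms: variances add

# Monte Carlo reference
rng = np.random.default_rng(0)
X = mu + sigma * rng.standard_normal((200_000, n))
Y = g(X.T)
print(f"UDR:         mean {mean_udr:.4f}, var {var_udr:.4f}")
print(f"Monte Carlo: mean {Y.mean():.4f}, var {Y.var():.4f}")
```

The appeal inside a design-optimization loop is the cost: here the UDR estimate needs 3 x 7 + 1 = 22 model evaluations per design instead of the thousands required by sampling.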
227

Probabilistic Ensemble-based Streamflow Forecasting Framework

Darbandsari, Pedram January 2021 (has links)
Streamflow forecasting is a fundamental component of various water resources management systems, ranging from flood control and mitigation to long-term planning of irrigation and hydropower systems. In the context of floods, a probabilistic forecasting system is required for proper and effective decision-making. Therefore, the primary goal of this research is the development of an advanced ensemble-based streamflow forecasting framework to better quantify the predictive uncertainty and generate enhanced probabilistic forecasts. This research started by comprehensively evaluating the performance of various lumped conceptual models in data-poor watersheds and comparing various Bayesian Model Averaging (BMA) modifications for probabilistic streamflow simulation. Then, using the concept of BMA, two novel probabilistic post-processing approaches were developed to enhance streamflow forecasting performance. The combination of entropy theory and the BMA method leads to an entropy-based Bayesian Model Averaging (En-BMA) approach for enhanced probabilistic streamflow and precipitation forecasting, and the integration of the Hydrologic Uncertainty Processor (HUP) and the BMA method is proposed for probabilistic post-processing of multi-model streamflow forecasts. Results indicated that the MACHBV and GR4J models are highly competent in simulating hydrological processes within data-scarce watersheds; however, the presence of lower-skill hydrologic models is still beneficial for ensemble-based streamflow forecasting. The comprehensive verification of the BMA approach for streamflow prediction identified the merits of implementing some of the previously recommended modifications and showed the importance of possessing a mutually exclusive and collectively exhaustive ensemble. By targeting the remaining limitation of the BMA approach, the proposed En-BMA method can improve probabilistic streamflow forecasting, especially under high-flow conditions. The proposed HUP-BMA approach takes advantage of both the HUP and BMA methods to better quantify the hydrologic uncertainty. Moreover, the applicability of the modified En-BMA as a more robust post-processing approach for precipitation forecasting, compared to BMA, has been demonstrated. / Thesis / Doctor of Philosophy (PhD) / Possessing a reliable streamflow forecasting framework is of special importance in various fields of operational water resources management, non-structural flood mitigation in particular. Accurate and reliable streamflow forecasts lead to the best possible in-advance flood-control decisions, which can significantly reduce the consequent loss of lives and property. The main objective of this research is to develop an enhanced ensemble-based probabilistic streamflow forecasting approach through proper quantification of predictive uncertainty using an ensemble of streamflow forecasts. The key contributions are: (1) implementing multiple diverse forecasts with full coverage of future possibilities in the Bayesian ensemble-based forecasting method to produce more accurate and reliable forecasts; and (2) developing an ensemble-based Bayesian post-processing approach to enhance hydrologic uncertainty quantification by taking advantage of multiple forecasts and the initial flow observation. The findings of this study are expected to benefit streamflow forecasting, flood control and mitigation, and water resources management and planning.
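For readers unfamiliar with the core BMA step, the sketch below fits a Gaussian Bayesian Model Averaging combination to a synthetic ensemble of three streamflow forecasts using the usual EM iteration: the predictive density is a weighted mixture of Gaussians centred on the member forecasts, with member weights and variances estimated from a training period. The synthetic data, the omission of member bias correction, and the sampling-based predictive interval are simplifications for illustration; this is not the En-BMA or HUP-BMA algorithm developed in the thesis.

```python
# Minimal sketch: Gaussian BMA for an ensemble of streamflow forecasts, fitted by EM.
import numpy as np

rng = np.random.default_rng(42)
T, K = 500, 3                                    # training days, ensemble members
truth = 50 + 20 * np.abs(rng.standard_normal(T))                 # synthetic observed flows
forecasts = np.column_stack([truth + rng.normal(b, s, T)         # members of varying skill
                             for b, s in [(0.0, 3.0), (2.0, 6.0), (-1.0, 12.0)]])

def normal_pdf(y, m, var):
    return np.exp(-0.5 * (y - m) ** 2 / var) / np.sqrt(2 * np.pi * var)

w = np.full(K, 1.0 / K)                          # initial member weights
var = np.full(K, truth.var())                    # initial member variances
for _ in range(200):                             # EM iterations
    dens = w * normal_pdf(truth[:, None], forecasts, var)        # (T, K) weighted likelihoods
    z = dens / dens.sum(axis=1, keepdims=True)                   # responsibilities
    w = z.mean(axis=0)
    var = (z * (truth[:, None] - forecasts) ** 2).sum(axis=0) / z.sum(axis=0)

print("BMA weights:", np.round(w, 3), "| member variances:", np.round(var, 1))

# Predictive mean and 90% interval for one new (hypothetical) ensemble forecast.
new = np.array([55.0, 58.0, 49.0])
draws = np.concatenate([rng.normal(new[k], np.sqrt(var[k]), int(20_000 * w[k]))
                        for k in range(K)])
print("predictive mean:", round(draws.mean(), 1),
      "| 90% interval:", np.round(np.percentile(draws, [5, 95]), 1))
```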
228

Numerical Simulation of Short Fibre Reinforced Composites

Springer, Rolf 09 November 2023 (has links)
Lightweight structures have become more and more important in recent years. One special class of such structures is short fibre reinforced composites produced by injection moulding. To avoid expensive experiments for testing the mechanical behaviour of these composites, proper material models are needed; the main difficulty here is the stochastic nature of the fibre orientation. This thesis considers the simulation of such materials in a linear thermoelastic setting, i.e. the material is described by its heat conduction tensor κ(p), its thermal expansion tensor T(p), and its stiffness tensor C(p). Due to the production process, the internal fibre orientation p has to be understood as a random variable. As a consequence, the material quantities mentioned above also become random. The classical approach is to average these quantities and to solve the linear thermoelastic deformation problem with the averaged expressions. This thesis shows how that approach can be incorporated into existing finite element software in a time- and memory-efficient manner; for the time and memory efficiency in particular, several implementation aspects of the underlying software are highlighted. Numerical results are shown both for the classical material simulation and for the efficiency improvements of the software. Furthermore, the classical approach is extended to the simulation of thermal stresses by exploiting the stochastic nature of the heat conduction. This is done by expanding it in a series with respect to the underlying stochastic quantities, using known results from uncertainty quantification. With the help of these results, the temperature is expanded in a Taylor series around a suitably chosen expansion point, and this series is then incorporated into the computation of the thermal stresses. The advantage of this approach is shown in numerical experiments.
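As a rough illustration of the Taylor-expansion step, the sketch below expands the temperature of a toy one-dimensional conduction problem to first order around the mean heat conductivity and propagates the conductivity variance to the temperature, comparing the result with Monte Carlo. The rod model, boundary conditions and numbers are invented for the example; the thesis works with the full finite element setting.

```python
# Minimal sketch: first-order Taylor (perturbation) propagation of a random heat
# conductivity through a 1D steady conduction solve, checked against Monte Carlo.
import numpy as np

n, L, q = 50, 1.0, 1000.0                      # grid points, rod length, heat source
h = L / (n - 1)

def mid_temperature(kappa):
    """Steady 1D conduction kappa*T'' = -q with T(0) = T(L) = 0; temperature at mid-span."""
    A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * (kappa / h ** 2)
    b = np.full(n, -q)
    A[0, :], A[-1, :], b[0], b[-1] = 0.0, 0.0, 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0                  # Dirichlet boundary rows
    return np.linalg.solve(A, b)[n // 2]

kappa_mean, kappa_std = 2.0, 0.3               # random conductivity (illustrative values)
eps = 1e-4 * kappa_mean
dT_dk = (mid_temperature(kappa_mean + eps) - mid_temperature(kappa_mean - eps)) / (2 * eps)

mean_taylor = mid_temperature(kappa_mean)      # first-order mean estimate
std_taylor = abs(dT_dk) * kappa_std            # first-order standard deviation

rng = np.random.default_rng(0)
mc = [mid_temperature(k) for k in rng.normal(kappa_mean, kappa_std, 5000)]
print(f"Taylor:      mean {mean_taylor:.2f}, std {std_taylor:.2f}")
print(f"Monte Carlo: mean {np.mean(mc):.2f}, std {np.std(mc):.2f}")
```

Because the temperature depends nonlinearly on the conductivity, the first-order mean is slightly biased; the benefit of the series approach is that it avoids repeating the expensive solve for every sample.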
229

Mean square solutions of random linear models and computation of their probability density function

Jornet Sanz, Marc 05 March 2020 (has links)
[EN] This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distributions. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek for certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated because of their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Fröbenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Fröbenius method together with Monte Carlo simulation are used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case. 
/ This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. / Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
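As a small illustration of the power-series (Fröbenius-type) construction combined with Monte Carlo, the sketch below solves a toy random oscillator y'' + A y = 0 with a truncated series, samples the random coefficient and initial conditions, and estimates the mean, variance and a kernel-density approximation of the solution at one time point. The equation, distributions, truncation order and the plain kernel density estimate are illustrative simplifications; they are not the randomized Legendre problem or the variance-reduced Monte Carlo schemes developed in the thesis.

```python
# Minimal sketch: truncated power-series (Frobenius) solution of a random linear ODE
# y'' + A y = 0, with Monte Carlo estimation of moments and of the density of y(t*).
import numpy as np
from scipy.stats import gaussian_kde

def series_solution(A, y0, y1, t, N=40):
    """Truncated series solution of y'' + A y = 0 with y(0) = y0, y'(0) = y1."""
    c = np.zeros(N)
    c[0], c[1] = y0, y1
    for k in range(N - 2):
        c[k + 2] = -A * c[k] / ((k + 2) * (k + 1))   # recurrence implied by the ODE
    return np.polynomial.polynomial.polyval(t, c)

rng = np.random.default_rng(7)
M, t_star = 20_000, 1.5
A = rng.uniform(0.5, 2.0, M)                   # random coefficient
y0 = rng.normal(1.0, 0.1, M)                   # random initial position
y1 = rng.normal(0.0, 0.2, M)                   # random initial velocity

y = np.array([series_solution(a, p, v, t_star) for a, p, v in zip(A, y0, y1)])
print(f"E[y({t_star})] = {y.mean():.4f},  Var[y({t_star})] = {y.var():.4f}")

# Crude approximation of the probability density function of y(t*).
pdf = gaussian_kde(y)
grid = np.linspace(y.min(), y.max(), 5)
print("density at a few points:", np.round(pdf(grid), 3))
```

Quadrature-based and multilevel variance-reduction strategies of the kind mentioned in the abstract would reduce the number of series evaluations needed for a given accuracy of the density estimate.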
230

Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models

Boopathy, Komahan 05 June 2014 (has links)
No description available.
