  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

[en] INTEGRITY OF AN OFFSHORE STRUCTURE SUBJECTED TO WAVES / [pt] INTEGRIDADE DE UMA ESTRUTURA OFFSHORE SUJEITA À ONDAS

VICTOR FERNANDO DEORSOLA SACRAMENTO 11 April 2019 (has links)
This work presents a method for evaluating the fatigue resistance of a drilling tower, considering the sea surface elevation, the dynamics of the platform on which the tower is installed, and the dynamics of the tower itself. Reduced-order models are used to obtain the sea surface elevation and the dynamics of the tower, and uncertainties in the parameters of the system's components can be included in the analysis as well. The analysis can be carried out for several sea states according to their probability distribution, and no assumption about the probability distribution of the stress ranges has to be made beforehand. The histogram of the distribution of stress ranges over the entire working life of the equipment is obtained using a Rainflow cycle-counting procedure. The results and their uncertainties are discussed.
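As a sketch of the final step described above, the damage implied by a Rainflow stress-range histogram can be accumulated with the Palmgren-Miner rule; the single-slope S-N curve parameters `A` and `m` below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def miner_damage(stress_ranges, counts, A=1e12, m=3.0):
    # Assumed single-slope S-N curve: N_allowed = A * S**(-m).
    n_allowed = A * np.asarray(stress_ranges, float) ** (-m)
    # Palmgren-Miner linear damage accumulation: D = sum(n_i / N_i).
    return float(np.sum(np.asarray(counts, float) / n_allowed))

# Hypothetical lifetime histogram: bin midpoints (MPa) and cycle counts.
ranges = [50.0, 100.0, 150.0]
counts = [1e6, 1e5, 1e4]
D = miner_damage(ranges, counts)  # D >= 1 would signal fatigue failure
```

With a probabilistic description of the sea states, the histogram itself becomes uncertain, and `D` inherits that uncertainty.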
32

[en] AN INTRODUCTION TO MODEL REDUCTION THROUGH THE KARHUNEN-LOÈVE EXPANSION / [pt] UMA INTRODUÇÃO À REDUÇÃO DE MODELOS ATRAVÉS DA EXPANSÃO DE KARHUNEN-LOÈVE

CLAUDIO WOLTER 10 April 2002 (has links)
The main objective of this dissertation is to study applications of the Karhunen-Loève expansion, or decomposition, in structural dynamics. The technique consists, basically, in obtaining a linear decomposition of the dynamic response of a general system represented by a stochastic vector field. It has the important property of optimality: for a given number of modes, no other linear decomposition can better represent the response. This information-compression capability makes the decomposition a powerful tool for constructing reduced-order models of mechanical systems in general. In particular, this work deals with structural dynamics problems, where its application is still quite recent. Initially, the main hypotheses necessary for applying the Karhunen-Loève expansion are presented, as well as two existing techniques for its implementation, with distinct domains of use. Special attention is paid to the relation between the empirical eigenmodes provided by the expansion and the mode shapes intrinsic to linear vibrating systems, both discrete and continuous, exemplified by a two-dimensional truss and a rectangular plate. Along the same lines, the advantages and disadvantages of using this expansion as an alternative to classical modal analysis are discussed. As a nonlinear application, the study of a vibro-impact system consisting of a cantilever beam whose transverse displacement is constrained by two elastic barriers is presented. The empirical eigenmodes obtained through the Karhunen-Loève expansion are then used to formulate a reduced-order model via Galerkin projection, and the performance of this new model is investigated.
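The optimality property described above can be illustrated with a snapshot-based Karhunen-Loève (POD) computation: the empirical covariance is diagonalized through the SVD of the centered snapshot matrix. The two-mode synthetic response below is an assumed stand-in, not data from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "response ensemble": 200 snapshots of a 50-point field
# dominated by two spatial shapes with very different amplitudes.
x = np.linspace(0.0, 1.0, 50)
snapshots = (np.outer(rng.standard_normal(200), np.sin(np.pi * x))
             + 0.1 * np.outer(rng.standard_normal(200), np.sin(2 * np.pi * x)))

# KL/POD: SVD of the centered snapshot matrix yields the empirical
# eigenmodes (rows of vt) and the variance captured by each mode.
centered = snapshots - snapshots.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
energy = s**2 / np.sum(s**2)  # fraction of variance per empirical mode
modes = vt                    # empirical eigenmodes, ordered by energy
```

Here almost all the variance is concentrated in the first mode, which is the information-compression property the abstract refers to: truncating after the dominant modes gives the best possible linear reduced basis.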
33

Propagation d'incertitudes en CEM. Application à l'analyse de fiabilité et de sensibilité de lignes de transmission et d'antennes / Uncertainty propagation in EMC. Application to reliability and sensitivity analyzes of transmission lines and antennas

Kouassi, Attibaud 18 December 2017 (has links)
Nowadays, most EMC analyses of electronic devices and systems are based on quasi-deterministic approaches in which the models' internal and external parameters are assumed to be perfectly known, and the uncertainties affecting them are accounted for in the responses through large safety margins. The drawback of such approaches is that they are not only overly conservative but also unsuited to situations where the goal of the study requires modeling the randomness of these parameters with appropriate stochastic models: random variables, random processes, or random fields. In recent years this probabilistic approach has been the subject of a number of research efforts in EMC, both nationally and internationally. The work presented in this thesis contributes to that research and has a dual purpose: (1) develop a probabilistic methodology and implement the associated numerical tools for the reliability and sensitivity analysis of electronic devices and systems, restricting the stochastic models to random variables; (2) extend this study to stochastic modeling by random processes and random fields, through a prospective analysis based on solving the telegrapher's partial differential equations with random coefficients. The probabilistic approach in (1) consists in computing the failure probability of an electronic device or system with respect to a given failure criterion, and in determining the relative importance of each random parameter involved. The methods chosen for this purpose are adaptations to EMC of methods developed in stochastic mechanics for uncertainty propagation studies. For computing failure probabilities, two broad categories of methods are proposed: those based on an approximation of the limit-state function associated with the failure criterion, and Monte Carlo methods based on numerical simulation of the model's random variables and statistical estimation of the target probabilities. For the sensitivity analysis, a local approach and a global approach are retained. These methods are first tested on academic applications to highlight their interest in the EMC field; they are then applied to transmission-line and antenna problems more representative of reality. In the prospective analysis, advanced resolution methods are proposed, based on spectral techniques requiring polynomial chaos and Karhunen-Loève expansions of the random processes and fields present in the models. The first numerical tests of these methods were encouraging, but they are not presented in the thesis for lack of time to analyze them completely.
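A minimal Monte Carlo estimate of a failure probability, in the spirit of the second category of methods above: simulate the model's random variables, evaluate the limit-state function, and count failures. The log-normal induced-level/susceptibility-threshold model is a hypothetical stand-in for an actual EMC limit state.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # number of Monte Carlo samples

# Hypothetical EMC margin model: induced disturbance level vs. equipment
# susceptibility threshold, both assumed log-normal.
induced = rng.lognormal(mean=0.0, sigma=0.5, size=n)
threshold = rng.lognormal(mean=1.0, sigma=0.3, size=n)

# Limit-state function g: failure is the event g <= 0.
limit_state = threshold - induced
p_fail = float(np.mean(limit_state <= 0.0))
```

The estimator's standard error scales as sqrt(p(1-p)/n), which is why approximation-based methods (FORM/SORM-type limit-state approximations) become attractive for very small target probabilities.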
34

Puissance asymptotique des tests non paramétriques d'ajustement du type Cramer-Von Mises / Asymptotic power of nonparametric goodness-of-fit tests of the Cramér-von Mises type

Boukili Makhoukhi, Mohammed 21 June 2007 (has links) (PDF)
Statistical analysis, broadly understood, is centered on the description and, when circumstances permit, the quantitative modeling of observed phenomena, provided these carry some uncertainty and are therefore governed by the laws of chance. In this scientific activity, the greatest care must be taken in validating the modeling assumptions needed to interpret the results. This general principle applies to all experimental sciences, to the human sciences (as in psychology), to economics, and to many other disciplines. A scientific theory initially rests on modeling assumptions, which are then put to the test of experiment. Experimentation is based on collecting data, whose compatibility or incompatibility with the chosen models must be decided, leading either to the rejection or to the acceptance of those models. For this purpose, statistics has developed a methodology based on hypothesis tests, whose foundations are not discussed in this thesis. This thesis studies certain goodness-of-fit tests ("tests of fit"), of both parametric and nonparametric nature. The technical aspects of these hypothesis tests are addressed in the particular context of Cramér-von Mises-type tests; the approach initially used for Kolmogorov-Smirnov-type tests is also cited. Nikitin's monograph was a basic reference particularly suited to the nature of this research. The main objective of the thesis is to evaluate the asymptotic power of certain goodness-of-fit tests belonging to the general Cramér-von Mises family. This power is evaluated with respect to suitable sequences of local alternatives. The method uses Karhunen-Loève expansions of a weighted Brownian bridge. A secondary objective of this work was to complement recent research by P. Deheuvels and G. Martynov, who gave the eigenfunctions and eigenvalues of the Karhunen-Loève expansions of certain weighted Brownian bridges in terms of Bessel functions. The first part presents the foundations of Karhunen-Loève expansions and the applications that follow from them in probability and statistics. The second part is devoted to the component of hypothesis-testing theory needed in the remainder of the thesis; there it is shown how explicit knowledge of the components of a Karhunen-Loève expansion helps in evaluating the power of goodness-of-fit tests based on the Cramér-von Mises statistics linked to that expansion.
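The Cramér-von Mises statistic at the heart of these tests has a simple computational form; under the null it converges to a weighted sum of chi-square variables whose weights are the Karhunen-Loève eigenvalues 1/(kπ)² of the Brownian bridge, which is the link the thesis exploits. The sketch below computes W² against the uniform null (any continuous null F₀ reduces to this case via the probability integral transform).

```python
import numpy as np

def cramer_von_mises(sample):
    # W^2 = 1/(12 n) + sum_i (x_(i) - (2i-1)/(2n))^2 for a sample tested
    # against U(0,1); x_(i) are the order statistics.
    x = np.sort(np.asarray(sample, float))
    n = x.size
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + float(np.sum((x - (2 * i - 1) / (2 * n)) ** 2))

# A sample sitting exactly on the null quantiles attains the minimum 1/(12 n).
w = cramer_von_mises([(2 * i - 1) / 20 for i in range(1, 11)])
```

Local power calculations then amount to studying how alternatives shift the distribution of this quadratic functional along the KL eigendirections.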
35

A Study Of Natural Convection In Molten Metal Under A Magnetic Field

Guray, Ersan 01 September 2006 (has links) (PDF)
The interaction between thermal convection and a magnetic field is of interest in geophysical and astrophysical problems as well as in metallurgical processes such as casting or crystallization. A magnetic field may act so as to damp the convective velocity field in the melt, or to reorganize the flow into alignment with the field. This ability to manipulate the flow field is of technological importance in industrial processes. In this work, a direct numerical simulation of three-dimensional Boussinesq convection in a horizontal layer of electrically conducting fluid, confined between two perfectly conducting horizontal plates heated from below in a gravitational and magnetic field, is performed using a spectral element method. Periodic boundary conditions are assumed in the horizontal directions. The numerical model is then used to study the effects of the imposed magnetic field. Finally, a low-dimensional representation scheme is presented based on the Karhunen-Loève approach.
36

New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems

Dutta, Parikshit August 2011 (has links)
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of the dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which are generally nonlinear in nature. Hence, to improve performance, robust algorithms that work efficiently in a nonlinear, non-Gaussian setting are desired. The main focus of this dissertation is to develop UQ algorithms for nonlinear systems where uncertainty evolves in a non-Gaussian manner. The algorithms developed are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, achieving the estimation task through higher-order moment updates and Bayes' rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and how states can then be estimated via a Bayesian update. Finally, a method is proposed to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion; the uncertainty in the resulting approximated system is propagated using the FP operator. The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF), and the FP operator-based techniques were compared with particle filters, when applied to a Duffing oscillator and to the hypersonic reentry of a vehicle into the atmosphere of Mars. It was found that the PC-based estimators are more accurate than the EKF or UKF, and the FP operator-based estimators are computationally superior to the particle filtering algorithms.
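The core idea of moment propagation without linearization can be sketched in its simplest non-intrusive form: push a Gaussian input through a nonlinear map with Gauss-Hermite quadrature, which is exact for polynomial maps, rather than linearizing as an EKF would. The quadratic map below is a hypothetical stand-in for the dissertation's far richer dynamics.

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: nodes/weights for weight exp(-x^2/2);
# the weights sum to sqrt(2*pi), so moments need that normalization.
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
norm = np.sqrt(2.0 * np.pi)

# Hypothetical nonlinear map y = f(x) = x**2 applied to x ~ N(0, 1).
y = nodes**2
mean = float(np.sum(weights * y) / norm)        # E[y]   (exact: 1)
second = float(np.sum(weights * y**2) / norm)   # E[y^2] (exact: 3)
var = second - mean**2                          # Var[y] (exact: 2)
```

An EKF-style linearization at x = 0 would predict zero mean and zero variance for this map, which is precisely the kind of error the non-Gaussian methods above avoid.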
37

Uma análise funcional da dinâmica de densidades de retornos financeiros / A functional analysis of the dynamics of financial return densities

Horta, Eduardo de Oliveira January 2011 (has links)
Adequate specification of the probability density functions (pdfs) of asset returns is a most relevant topic in the econometric modeling of financial data. This dissertation offers a distinct approach to the problem by applying the methodology developed in Bathia et al. (2010) to intraday Bovespa index data. The approach consists in focusing the analysis directly on the dynamic structure of the returns' pdfs, viewing them as a sequence of random variables taking values in a function space. The serial dependence between these curves allows one to obtain filtered estimates of the pdfs, and even to forecast densities beyond the sample period. In the paper that forms part of this dissertation, evidence is found that the dynamic behavior of the Bovespa index returns' pdfs reduces to a two-dimensional process, which is well represented by a VAR(1) model and whose dynamics affect the dispersion and asymmetry of the distributions from day to day. Moreover, one-step-ahead forecasts of the pdfs were constructed on subsamples and evaluated according to appropriate metrics.
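The two-dimensional VAR(1) representation found for the density dynamics can be sketched as follows: estimate the coefficient matrix by least squares on the extracted scores and forecast one step ahead. The coefficient matrix and noise level below are illustrative assumptions, not the estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a 2-dimensional VAR(1): z_t = A z_{t-1} + e_t, standing in for
# the low-dimensional scores driving the density curves.
A = np.array([[0.6, 0.1], [-0.2, 0.5]])
z = np.zeros((500, 2))
for t in range(1, 500):
    z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(2)

# Least-squares estimate of A (solve X B = Y with B = A^T), then a
# one-step-ahead forecast from the last observation.
X, Y = z[:-1], z[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
forecast = A_hat @ z[-1]
```

In the functional setting, `forecast` would then be mapped back through the estimated basis curves to produce a forecast density.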
40

Numerical Methods For Solving The Eigenvalue Problem Involved In The Karhunen-Loeve Decomposition

Choudhary, Shalu 02 1900 (has links) (PDF)
In structural analysis and design it is important to consider the effects of uncertainties in loading and material properties in a rational way. Uncertainty in material properties, such as heterogeneity in elastic and mass properties, can be modeled as a random field. For computational purposes it is essential to discretize and represent the random field. For a field with known second-order statistics, such a representation can be achieved by the Karhunen-Loève (KL) expansion: the random field is represented by a truncated series expansion using a few eigenvalues and associated eigenfunctions of the covariance function, together with corresponding random coefficients. The eigenvalues and eigenfunctions of the covariance kernel are obtained by solving a Fredholm integral equation of the second kind. A closed-form solution of this integral equation, especially for arbitrary domains, may not always be available, so an approximate solution is sought; in finding one, both the accuracy of the solution and the cost of computing it must be considered. This work explores a few numerical methods for estimating the solution of this integral equation. Three different methods are implemented and studied numerically: (i) using finite element bases (Method 1), (ii) mid-point approximation (Method 2), and (iii) the Nyström method (Method 3). The methods and results are compared in terms of accuracy, computational cost, and difficulty of implementation. In the first method, an eigenfunction is represented as a linear combination of a set of finite element bases; the resulting error in the integral equation is then minimized in the Galerkin sense, which yields a generalized matrix eigenvalue problem. In the second method, the domain is partitioned into a finite number of subdomains; the covariance function is discretized by approximating its value locally over each subdomain, transforming the integral equation into a matrix eigenvalue problem. In the third method, the Fredholm integral equation is approximated by a quadrature rule, which also results in a matrix eigenvalue problem. The first part of the numerical study compares these three methods, first in a one-dimensional domain. For the study in two dimensions, a simple rectangular domain (referred to as Domain 1) is taken, with an uncertain material property modeled as a Gaussian random field. For the chosen covariance model and domain the analytical solutions are known, which allows the accuracy of the numerical solutions to be verified. The three numerical methods are then studied and compared for a chosen target accuracy and for different correlation lengths of the random field. It was observed that Methods 2 and 3 are much faster than Method 1. On the other hand, Methods 2 and 3 incur the additional cost of discretizing the domain into nodes, whereas for a mechanics-related problem Method 1 can reuse the finite element mesh already available for solving the mechanics problem. The second part of the work studies the effect of the geometry of the model on realizations of the random field, with the objective of assessing whether the random field for a complicated domain can be generated from the KL expansion on a simpler domain. For this purpose, two KL decompositions are obtained: one on Domain 1, and another on the same rectangular domain modified with a rectangular hole inside it (referred to as Domain 2). The random process is generated and the realizations are compared. It was observed that the probability density functions at the nodes of both domains, that is, Domain 1 and Domain 2, are similar. This observation suggests that a complicated domain can be replaced by a corresponding simpler domain, thereby reducing the computational cost.
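Method 3 above (the Nyström method) can be sketched in a few lines for the exponential covariance kernel, for which the KL eigenvalues are known analytically and can serve as a check; the correlation length and grid below are illustrative choices.

```python
import numpy as np

# Nystrom discretization of the Fredholm eigenproblem
#   integral_0^1 C(s, t) phi(t) dt = lambda * phi(s)
# for the exponential covariance C(s, t) = exp(-|s - t| / l).
n, l = 200, 0.5
t, h = np.linspace(0.0, 1.0, n, retstep=True)  # equal-weight quadrature nodes
C = np.exp(-np.abs(t[:, None] - t[None, :]) / l)

# The quadrature rule turns the integral equation into a matrix
# eigenvalue problem for (C * h); eigvalsh returns ascending order.
eigvals = np.linalg.eigvalsh(C * h)[::-1]  # largest KL eigenvalues first
```

The eigenvalues sum to the integral of C(t, t), i.e. the domain length, and their rapid decay is what makes a short KL truncation effective; higher-order quadrature rules improve the accuracy of the leading eigenpairs at the same cost.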
