201. Error Propagation and Metamodeling for a Fidelity Tradeoff Capability in Complex Systems Design. McDonald, Robert Alan, 07 July 2006.
Complex man-made systems are ubiquitous in modern technological society. The national air transportation infrastructure and the aircraft that operate within it, the highways stretching coast-to-coast and the vehicles that travel on them, and global communications networks and the computers that make them possible are all complex systems.
It is impossible to fully validate a systems analysis or a design process. Systems are too large, complex, and expensive for dedicated test and validation articles to be built. Furthermore, the operating conditions throughout the life cycle of a system are impossible to predict and control for a validation experiment.
Error is introduced at every point in a complex systems design process, and every error source propagates through the system just as information does: feedforward, feedback, and coupling are all present in the propagation of error.
As with error propagation through a single analysis, error sources grow and decay when propagated through a complex system, and these behaviors are made more intricate by the interactions of a complete system. This complication, and the loss of intuition that accompanies it, makes proper error propagation calculations all the more important as an aid to the decision maker.
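As a minimal sketch of the idea (not from the thesis; the analysis functions and error model below are illustrative assumptions), an input error source can be propagated through a coupled two-discipline system by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def aero(weight):
    # Hypothetical aerodynamics analysis: drag grows with vehicle weight.
    return 0.02 * weight + 50.0

def structures(drag):
    # Hypothetical structures analysis: weight grows with drag loads.
    return 900.0 + 4.0 * drag

def coupled_system(weight_guess, aero_error):
    # Fixed-point iteration over the aero/structures coupling; 'aero_error'
    # models an imperfect-tool error injected into the drag prediction.
    weight = weight_guess
    for _ in range(50):
        drag = aero(weight) + aero_error
        weight = structures(drag)
    return weight

# Sample the error source and watch it grow or decay through the coupling.
samples = [coupled_system(1000.0, rng.normal(0.0, 2.0)) for _ in range(5000)]
print(np.mean(samples), np.std(samples))  # converged weight and its propagated error
```

In this toy system the feedback loop turns a drag error of standard deviation 2 into a weight error of roughly 4/(1 - 0.08) x 2, about 8.7, illustrating how coupling reshapes error in ways that are hard to anticipate by intuition alone.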
Error allocation and fidelity trade decisions answer questions like: Is the fidelity of a complex systems analysis adequate, or is an improvement needed, and how is that improvement best achieved? Where should limited resources be invested for the improvement of fidelity? How does knowledge of the imperfection of a model impact design decisions based on the model and the certainty of the performance of a particular design?
In this research, a fidelity trade environment was conceived, formulated, developed, and demonstrated. This development relied on the advancement of enabling techniques including error propagation, metamodeling, and information management. A notional transport aircraft is modeled in the fidelity trade environment. Using the environment, the designer is able to make design decisions while considering error, and to make decisions regarding the required tool fidelity as the design problem evolves. These decisions could not be made in a quantitative manner before the fidelity trade environment was developed.
202. Coupled Space-Angle Adaptivity and Goal-Oriented Error Control for Radiation Transport Calculations. Park, HyeongKae, 15 November 2006.
This research is concerned with the self-adaptive numerical solution of the neutral particle radiation transport problem. Radiation transport is an extremely challenging computational problem since the governing equation is seven-dimensional (3 in space, 2 in direction, 1 in energy, and 1 in time) with a high degree of coupling between these variables. If care is not taken, discretizing this relatively large number of independent variables can lead to sets of linear equations of intractable size. Though parallel computing has allowed the solution of very large problems, available computational resources will always be finite, because industry demands ever more sophisticated multiphysics models. There is thus a pressing need to optimize the discretization so as to minimize the effort and maximize the accuracy.
One way to achieve this goal is through adaptive phase-space refinement. Unfortunately, the quality of a discretization (and of its solution) is, in general, not known a priori; accurate error estimates can only be attained via a posteriori error analysis. In particular, in the context of the finite element method, a posteriori error analysis provides a rigorous error bound. The main difficulty in applying well-established a posteriori error analysis and subsequent adaptive refinement to radiation transport is the strong coupling between the spatial and angular variables. This research addresses this issue within the context of the second-order, even-parity form of the transport equation discretized with the finite-element spherical harmonics method.
The objective of this thesis is to develop a posteriori error analysis in a coupled space-angle framework, together with an efficient adaptive algorithm. Moreover, a mesh refinement strategy tuned to minimize the error in a target engineering output has been developed by employing the dual (adjoint) problem. This numerical framework has been implemented in the general-purpose neutral particle code EVENT for assessment.
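In standard dual-weighted-residual notation (a common formalization of the goal-oriented idea described above, sketched here for a linear problem and linear output; not necessarily the exact estimator used in the thesis), with bilinear form a, load functional l, output functional J, discrete space V_h, and Galerkin solution u_h:

```latex
\begin{aligned}
&\text{primal: } a(u, v) = \ell(v) \ \ \forall v, \qquad \text{dual: } a(v, z) = J(v) \ \ \forall v,\\
&J(u) - J(u_h) \;=\; a(u - u_h,\, z) \;=\; \ell(z - v_h) - a(u_h,\, z - v_h) \qquad \forall v_h \in V_h.
\end{aligned}
```

Splitting the last expression into element-wise contributions, weighted by the dual solution, gives refinement indicators that target the engineering output rather than a global norm.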
203. Error estimation and grid adaptation for functional outputs using discrete-adjoint sensitivity analysis. Balsubramanian, Ravishankar, 2002.
Thesis (M.S.), Mississippi State University, Department of Computational Engineering. Title from title screen. Includes bibliographical references.
204. Numerical study of error propagation in Monte Carlo depletion simulations. Wyant, Timothy Joseph, 26 June 2012.
Improving computer technology and the desire to model the heterogeneity of the nuclear reactor environment more accurately have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. In this case, however, statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, four test problems were developed to study it in the fuel-assembly and core domains. Three test cases modeled and tracked individual fuel pins in four 17x17 PWR fuel assemblies. A fourth problem modeled a well-characterized 330 MWe nuclear reactor core. By changing the code's initial random number seed, the data produced by a series of 19 replica runs of each test case were used to investigate the true and apparent variance in k-eff, pin powers, and the number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters.
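A minimal sketch of the replica-run comparison described above (the numbers are synthetic stand-ins, not the study's data): the apparent variance is what a single Monte Carlo run reports for itself, while the true variance is the scatter actually observed across independently seeded replicas.

```python
import numpy as np

rng = np.random.default_rng(42)
n_replicas = 19

# Hypothetical data: each replica reports its own k-eff standard deviation
# (~20 pcm), but the replicas scatter more widely than any one run claims.
reported_sigma = np.full(n_replicas, 20e-5)
keff = 1.0 + rng.normal(0.0, 35e-5, size=n_replicas)

apparent_var = np.mean(reported_sigma**2)  # variance a single run would quote
true_var = np.var(keff, ddof=1)            # variance estimated across seeds

# A ratio well above 1 signals error propagated in from earlier depletion steps.
print(true_var / apparent_var)
```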
205. DIRECT, análise intervalar e otimização global irrestrita / DIRECT, interval analysis and unconstrained global optimization. Gonçalves, Douglas Soares, 2009.
Advisor: Marcia Aparecida Gomes Ruggiero. Dissertation (Master's), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2009.
Abstract: In this work we analyze two methods for unconstrained global optimization: DIRECT, a branch-and-select method based on Lipschitzian optimization with a special selection criterion that balances the emphasis between local and global search; and a branch-and-bound method employing state-of-the-art interval analysis techniques, together with back-boxing and local search, to speed up convergence. Variations of the interval branch-and-bound method, and combinations of it with the ideas of DIRECT, were formulated and implemented. Application to classical problems from the literature shows that the adopted strategies improve the performance of the algorithms. Master's in Applied Mathematics.
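A minimal sketch of the interval branch-and-bound idea (illustrative only: a hand-rolled interval type with natural interval extensions and midpoint sampling, not the dissertation's implementation or its combination with DIRECT):

```python
class Interval:
    """Closed interval [lo, hi] with just enough arithmetic for the example."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        o = _coerce(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __sub__(self, o):
        o = _coerce(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = _coerce(o)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    __rmul__ = __mul__
    def width(self):
        return self.hi - self.lo
    def mid(self):
        return 0.5 * (self.lo + self.hi)

def _coerce(x):
    return x if isinstance(x, Interval) else Interval(x, x)

def f(x):
    # Works on floats and on Intervals (natural interval extension).
    return x * x * x * x - 4.0 * (x * x) + x

def interval_branch_and_bound(lo=-3.0, hi=3.0, tol=1e-6):
    best = f(0.5 * (lo + hi))            # incumbent upper bound on the minimum
    work = [Interval(lo, hi)]
    while work:
        box = work.pop()
        enclosure = f(box)               # guaranteed bounds on f over the box
        if enclosure.lo > best:          # prune: box cannot hold the global minimum
            continue
        best = min(best, f(box.mid()))   # tighten the incumbent at the midpoint
        if box.width() > tol:            # otherwise bisect and keep searching
            m = box.mid()
            work += [Interval(box.lo, m), Interval(m, box.hi)]
    return best

print(interval_branch_and_bound())  # rigorous global minimum of f on [-3, 3]
```

The pruning test is what interval analysis buys: the enclosure is a guaranteed bound, so discarded boxes provably cannot contain the global minimizer, which is the property the back-boxing and local-search accelerations build on.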
206. Estatística e a teoria de conjuntos fuzzy / Statistics and fuzzy set theory. González Campos, José Alejandro, 3 June 2015.
Advisors: Víctor Hugo Lachos Dávila, Alexandre Galvão Patriota. Thesis (Doctorate), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2015.
Abstract: Fuzzy set theory is a theory introduced by Zadeh in 1965. In recent years it has spread widely, reaching diverse fields of science. This work distinguishes three dimensions: fuzzy set theory in its pure form; connections between statistics and fuzzy set theory (interpretation and visualization); and, finally, statistics applied to fuzzy data. The elementary definitions of fuzzy set theory, such as fuzzy number, core, and normal fuzzy set, are presented so as to make the thesis self-contained. Within the first dimension, a new order on LR-type fuzzy numbers was defined, characterized by its computational simplicity; the usual order on the real numbers arises as a particular case when reals are regarded as fuzzy numbers. This proposal overcomes many of the limitations of previously proposed orders, such as indetermination and indefiniteness. The definition of an order makes statistical tools such as the median and measures of variability available. In the second dimension, a new tool is presented for interpreting confidence regions after the sample has been observed: a membership function is defined that represents the parameter space in a fuzzy way for each confidence region. A new way of visualizing an infinite sequence of confidence regions is also presented. Finally, in the third dimension, the generalization of the Kaplan-Meier estimator to lifetimes given as fuzzy numbers is studied, opening a line of research based on its asymptotic properties; a typical example from survival analysis is used. This thesis presents the elementary theoretical foundations needed to begin a new line of research, attending to our human nature and attempting to escape from Platonic assumptions. Doctorate in Statistics.
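A minimal sketch of how an order on fuzzy numbers can reduce to the usual order on the reals (illustrative assumptions only: triangular fuzzy numbers and a simple lexicographic center-then-spread comparison, which is one easy-to-compute candidate and not necessarily the order defined in the thesis):

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    """Triangular fuzzy number with support [a, c] and core (peak) at b."""
    a: float
    b: float
    c: float

    def membership(self, x: float) -> float:
        # Piecewise-linear membership function; equals 1 only at the core.
        if self.a < x <= self.b:
            return (x - self.a) / (self.b - self.a) if self.b > self.a else 1.0
        if self.b <= x < self.c:
            return (self.c - x) / (self.c - self.b) if self.c > self.b else 1.0
        return 1.0 if x == self.b else 0.0

def less_than(u: TriangularFuzzy, v: TriangularFuzzy) -> bool:
    # Candidate order: compare cores first, then spreads as a tie-break.
    # For crisp numbers (a == b == c) this is exactly the order on the reals.
    if u.b != v.b:
        return u.b < v.b
    return (u.c - u.a) < (v.c - v.a)

x = TriangularFuzzy(1.0, 2.0, 3.0)   # "about 2"
y = TriangularFuzzy(2.5, 3.0, 3.5)   # "about 3"
print(less_than(x, y))               # True
```

Once such a total order exists, order statistics like the median of a fuzzy sample are immediate, which is the kind of statistical tooling the thesis builds from its order.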
207. REAL-TIME RECONCILIATION OF COAL COMBUSTOR DATA. Montgomery, Roger Lee, January 1982.
No description available.
208. On local constraints and regularity of PDE in electromagnetics: applications to hybrid imaging inverse problems. Alberti, Giovanni S., January 2014.
The first contribution of this thesis is a new regularity theorem for time-harmonic Maxwell's equations with less than Lipschitz, complex, anisotropic coefficients. By using the L^p theory for elliptic equations, it is possible to prove H^1 and Hölder regularity results, provided that the coefficients are W^{1,p} for some p > 3. This improves previous regularity results, where the assumption W^{1,∞} for the coefficients was believed to be optimal. The method can be easily extended to the case of bi-anisotropic materials, for which a separate approach turns out to be unnecessary. The second focus of this work is the boundary control of the Helmholtz and Maxwell equations to enforce local constraints inside the domain. More precisely, we look for suitable boundary conditions such that the corresponding solutions and their derivatives satisfy certain local non-zero constraints. Complex geometric optics solutions can be used to construct such illuminations, but are impractical for several reasons. We propose a constructive approach to this problem based on the use of multiple frequencies. The suitable boundary conditions are explicitly constructed and give the desired constraints, provided that a finite number of frequencies, given a priori, are chosen in a fixed range. This method is based on the holomorphicity of the solutions with respect to the frequency and on the regularity theory for the PDE under consideration. This theory finds applications to several hybrid imaging inverse problems, where the unknown coefficients have to be imaged from internal measurements. In order to perform the reconstruction, we often need to find suitable boundary conditions such that the corresponding solutions satisfy certain non-zero constraints, depending on the particular problem under consideration. The multiple frequency approach introduced in this thesis represents a valid alternative to the use of complex geometric optics solutions to construct such boundary conditions. Several examples are discussed.
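In symbols (a sketch of the setting as described above, up to sign conventions; the precise hypotheses are in the thesis), the time-harmonic system reads

```latex
\nabla \times E = \mathrm{i}\omega\mu H, \qquad \nabla \times H = -\mathrm{i}\omega\varepsilon E \qquad \text{in } \Omega \subset \mathbb{R}^3,
```

and the regularity statement asserts, roughly, that complex anisotropic coefficients ε, μ ∈ W^{1,p}(Ω) for some p > 3 yield fields E, H ∈ H^1(Ω) with Hölder-continuous representatives.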
209. A fictitious domain approach for hybrid simulations of eukaryotic chemotaxis. Seguis, Jean-Charles, January 2013.
Chemotaxis, the phenomenon through which cells respond to external chemical signals, is one of the most important and most universally observed processes in nature. It has been the object of considerable modelling effort in recent decades. The models for chemotaxis available in the literature cannot reconcile the dynamics of external chemical signals with the intracellular signalling pathways leading to the response of the cells, because the cell models used do not distinguish between the extracellular and intracellular domains. The work presented in this dissertation intends to resolve this issue. We set up a numerical hybrid simulation framework containing such a description, enabling the coupling of models for phenomena occurring at the extracellular and intracellular levels. Mathematically, this is achieved by the use of the fictitious domain method for finite elements, allowing the simulation of partial differential equations on evolving domains. In order to make the modelling of the membrane binding of chemical signals possible, we derive a suitable fictitious domain method for elliptic problems with Robin boundary conditions. We also show ways to minimise the computational cost of such simulations by deriving a suitable preconditioner for the linear systems resulting from the Robin fictitious domain method, as well as an efficient algorithm to compute fictitious-domain-specific linear operators. Lastly, we discuss the use of a simpler cell model from the literature and match it with our own model. Our numerical experiments show the relevance of the matching, as well as the stability and accuracy of the numerical scheme presented in the thesis.
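As a sketch of the type of model problem involved (generic notation and an assumed linear binding rate α, not the thesis's exact formulation), membrane binding of a diffusing signal u leads to a Robin problem posed outside the evolving cell domain ω(t), inside a fixed computational box Ω:

```latex
-\Delta u = f \quad \text{in } \Omega \setminus \overline{\omega(t)}, \qquad
\frac{\partial u}{\partial n} + \alpha u = g \quad \text{on } \partial\omega(t), \qquad
u = 0 \quad \text{on } \partial\Omega.
```

The fictitious domain method poses this on all of Ω with the membrane condition enforced weakly, so the finite element mesh need not track the moving boundary ∂ω(t).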
210. Efficient simulation of cardiac electrical propagation using adaptive high-order finite elements. Arthurs, Christopher J., January 2013.
This thesis investigates the high-order hierarchical finite element method, also known as the finite element p-version, as a computationally efficient technique for generating numerical solutions to the cardiac monodomain equation. We first present it as a uniform-order method, and through an a priori error bound we explain why the associated cardiac cell model must be thought of as a PDE and approximated to high order in order to obtain the accuracy that the p-version is capable of. We perform simulations demonstrating that the achieved error agrees very well with the a priori error bound. Further, in terms of solution accuracy for the time taken to solve the linear system that arises in the finite element discretisation, it is more efficient than the state-of-the-art piecewise linear finite element method. We show that piecewise linear FEM actually introduces quite significant amounts of error into the numerical approximations, particularly in the direction perpendicular to the cardiac fibres with physiological conductivity values, and that without resorting to extremely fine meshes with elements considerably smaller than 70 micrometres, we cannot use it to obtain high-accuracy solutions. In contrast, the p-version can produce extremely high-accuracy solutions on meshes with elements around 300 micrometres in diameter with these conductivities. Noting that most of the numerical error is due to under-resolving the wave-front in the transmembrane potential, we also construct an adaptive high-order scheme which controls the error locally in each element by adjusting the finite element polynomial basis degree using an analytically-derived a posteriori error estimation procedure. This naturally tracks the location of the wave-front, concentrating computational effort where it is needed most and increasing computational efficiency. The scheme can be controlled by a user-defined error tolerance parameter, which sets the target error within each element as a proportion of the local magnitude of the solution as measured in the H^1 norm. This numerical scheme is tested on a variety of problems in one, two and three dimensions, and is shown to provide excellent error control properties and to be likely capable of boosting efficiency in cardiac simulation by an order of magnitude. The thesis amounts to a proof of concept of the increased efficiency in solving the linear system using adaptive high-order finite elements when performing single-threaded cardiac simulation, and indicates that the performance of the method should be investigated in parallel, where it can also be expected to provide considerable improvement. In general, the selection of a suitable preconditioner is key to ensuring efficiency; we make use of a variety of different possibilities, including one which can be expected to scale very well in parallel, meaning that this is an excellent candidate method for increasing the efficiency of cardiac simulation using high-performance computing facilities.
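For reference, the standard form of the monodomain model referred to above (with χ the membrane surface-to-volume ratio, C_m the membrane capacitance, σ the conductivity tensor, V the transmembrane potential, and s the cell-model state variables governed by a system of ODEs):

```latex
\chi \left( C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V, \mathbf{s}) \right) = \nabla \cdot \left( \sigma \nabla V \right),
\qquad
\frac{\partial \mathbf{s}}{\partial t} = \mathbf{f}(V, \mathbf{s}).
```

The steep wave-front in V is precisely the feature that the thesis's p-adaptive scheme concentrates its polynomial degrees on.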