261
Scalable, adaptive methods for forward and inverse problems in continental-scale ice sheet modeling
Isaac, Tobin Gregory, 18 September 2015
Projecting the ice sheets' contribution to sea-level rise is difficult because of the complexity of accurately modeling ice sheet dynamics for the full polar ice sheets, because of the uncertainty in key, unobservable parameters governing those dynamics, and because quantifying the uncertainty in projections is necessary when determining the confidence to place in them. This work presents the formulation and solution of the Bayesian inverse problem of inferring, from observations, a probability distribution for the basal sliding parameter field beneath the Antarctic ice sheet. The basal sliding parameter is used within a high-fidelity nonlinear Stokes model of ice sheet dynamics. This model maps the parameters "forward" onto a velocity field that is compared against observations. Due to the continental scale of the model, both the parameter field and the state variables of the forward problem have a large number of degrees of freedom: we consider discretizations in which the parameter has more than 1 million degrees of freedom. The Bayesian inverse problem is thus to characterize an implicitly defined distribution in a high-dimensional space. This is a computationally demanding problem that requires that scalable and efficient numerical methods be used throughout: in discretizing the forward model; in solving the resulting nonlinear equations; in solving the Bayesian inverse problem; and in propagating the uncertainty encoded in the posterior distribution of the inverse problem forward onto important quantities of interest. To address discretization, a hybrid parallel adaptive mesh refinement format, suited to the large width-to-height aspect ratios of the polar ice sheets, is designed and implemented. An efficient solver for the nonlinear Stokes equations is designed for high-order, stable, mixed finite-element discretizations on these adaptively refined meshes. A Gaussian approximation of the posterior distribution of parameters is defined, whose mean and covariance can be efficiently and scalably computed using adjoint-based methods from PDE-constrained optimization. Using a low-rank approximation of the covariance of this distribution, the covariance of the parameter is pushed forward onto quantities of interest.
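As background for the uncertainty-quantification step described above, the Gaussian (Laplace) approximation and its low-rank form can be written compactly; the following is the standard formulation used in large-scale Bayesian inversion, stated here as an assumed sketch rather than quoted from the thesis:

    \pi_{\mathrm{post}}(m) \;\approx\; \mathcal{N}\!\left(m_{\mathrm{MAP}},\,\Gamma_{\mathrm{post}}\right),
    \qquad
    \Gamma_{\mathrm{post}} \;=\; \bigl(H_{\mathrm{misfit}}(m_{\mathrm{MAP}}) + \Gamma_{\mathrm{prior}}^{-1}\bigr)^{-1}
    \;\approx\; \Gamma_{\mathrm{prior}} \;-\; \Gamma_{\mathrm{prior}}^{1/2}\, V_r D_r V_r^{T}\, \Gamma_{\mathrm{prior}}^{1/2},
    \qquad
    D_r = \mathrm{diag}\!\left(\frac{\lambda_i}{1+\lambda_i}\right),

where (\lambda_i, v_i) are the dominant eigenpairs of the prior-preconditioned data-misfit Hessian \Gamma_{\mathrm{prior}}^{1/2} H_{\mathrm{misfit}} \Gamma_{\mathrm{prior}}^{1/2}, and the linearized push-forward onto a quantity of interest q = Q(m) is \Gamma_Q \approx J_Q \Gamma_{\mathrm{post}} J_Q^{T}, with J_Q the Jacobian of Q at m_{\mathrm{MAP}}.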
262
With a new refinement paradigm towards anisotropic adaptive FEM on triangular meshes
Schneider, Rene, 15 October 2013
Adaptive anisotropic refinement of finite element meshes makes it possible to reduce the computational effort required to achieve a specified accuracy of the solution of a PDE problem.
We present a new approach to adaptive refinement and demonstrate that it allows the construction of algorithms which generate very flexible and efficient anisotropically refined meshes, even improving the convergence order compared to adaptive isotropic refinement when the problem permits.
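As a purely illustrative sketch of where anisotropic refinement departs from the isotropic case, the marking step can act on a directional error indicator per element; the function below, its names and its thresholds are assumptions for illustration and are not taken from the paper:

    # Illustrative marking step for anisotropic refinement: each element carries
    # a directional indicator (eta_x, eta_y); elements with a strongly dominant
    # direction are split only across that direction.  Hypothetical, not the
    # paper's algorithm.
    def mark_elements(eta_dir, aniso_ratio=4.0, frac=0.3):
        """eta_dir: list of (eta_x, eta_y) indicator pairs, one per element.
        Returns (element_index, mode) pairs with mode in
        {'isotropic', 'split_x', 'split_y'}."""
        ranked = sorted(range(len(eta_dir)),
                        key=lambda i: sum(eta_dir[i]), reverse=True)
        marked = []
        for i in ranked[:max(1, int(frac * len(eta_dir)))]:
            ex, ey = eta_dir[i]
            if ex > aniso_ratio * ey:
                marked.append((i, 'split_x'))    # error dominated by the x-direction
            elif ey > aniso_ratio * ex:
                marked.append((i, 'split_y'))    # error dominated by the y-direction
            else:
                marked.append((i, 'isotropic'))  # no clear anisotropy
        return marked

A driver loop would call such a marker after each solve/estimate step and pass the marked elements to the mesh refiner.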
263
Linking welfare and quality of scientific output in cynomolgus macaques (Macaca fascicularis) used for regulatory toxicology
Tasker, Louisa, January 2012
Cynomolgus macaques (Macaca fascicularis) are the most commonly used non-human primate for research and testing in Europe. Their principal use is in preclinical safety testing of new pharmaceuticals to assess risk of adverse effects, as indicated by changes in a core battery of physiological measures before human exposure. Regulatory studies are strictly controlled through legislation and codes of practice underpinned by the principles of humane science, the 3Rs: Replacement, Reduction and Refinement. Despite the link between good welfare and good science now universally made in codes of practice, legislation and the literature, there are few studies aimed at systematically examining the link and almost no quantitative data from cynomolgus macaques used for toxicology. The main aim of this thesis was to examine the link between Refinement, animal welfare and scientific output for this important animal model, piggy-backing on regulatory studies conducted by a large contract research organisation. In the laboratory, animal welfare is formally considered in terms of Refinement, which has evolved to include both the reduction of negative welfare states and the proactive enhancement of positive welfare over the animal's lifetime. A multidisciplinary approach to welfare assessment, including measures of behaviour, physiology and physical health and building upon current unit procedures, was undertaken to produce an overall assessment of welfare in cynomolgus macaques. Macaque facial expressions, vocalisations, activity and position in the home cage, body weight change, body condition and alopecia scores were found to be reliable indicators of welfare state and would be most feasible for care staff to monitor. The concept of quality of scientific output was defined in relation to toxicological findings and includes sensitivity, reliability and repeatability of individual measures in the core battery (e.g. heart rate, blood pressure, haematology, clinical chemistry and organ weights). The link between welfare and quality of scientific output was then systematically explored with Refinements to macaque use in regulatory studies. The first, a data-mining study undertaken to quantify the effects on biological data recorded from cynomolgus macaques used in regulatory studies over an eight-year period, as the CASE sponsor transitioned from single to permanent group housing, found the effects on individual parameters in the core battery to be highly variable, and in some instances welfare-positive effects of group housing were confounded by concurrent changes in standard operating procedures. A further study of planned Refinements to macaque-care staff interaction through enhanced socialisation found that these helped animals cope better with husbandry and scientific procedures and enhanced the quality of cardiovascular measures recorded at baseline. In light of these findings, a number of recommendations are made, including a framework of terms useful for measuring quality of scientific output, a welfare assessment framework, and Refinements to husbandry and scientific procedures for cynomolgus macaques used in regulatory toxicology. Because of their capacity to suffer, it is both ethically and scientifically important that macaque welfare is maximised and that their use results in valid and reliable experimental outcomes informing on the safety and efficacy of new pharmaceuticals prior to human exposure.
264
Bisimulation Techniques and Algorithms for Concurrent Constraint Programming
Aristizábal, Andrés, 17 October 2012
Concurrency theory is concerned with computer systems of multiple agents that interact with each other, and bisimilarity is one of the main representatives of the behavioural equivalences used to reason about such systems. Concurrent constraint programming (ccp) is a formalism that combines the traditional algebraic and operational view of process calculi with a declarative one based on logic. The standard definition of bisimilarity is not completely satisfactory for ccp because it yields an equivalence that is too fine-grained. We introduce a labelled transition semantics and a notion of bisimilarity that is fully abstract with respect to observational equivalence in ccp. When the state space of a system is finite, the ordinary notion of bisimilarity can be computed with the partition refinement algorithm, but this algorithm does not work for ccp bisimilarity. We therefore provide an algorithm that allows us to verify strong bisimilarity for ccp, using a pre-refinement and a partition function based on irredundant bisimilarity. Weak bisimilarity is a behavioural equivalence obtained by taking into account only the actions that are observable in the system. Typically, standard partition refinement can be used to decide weak bisimilarity simply by using Milner's reduction from weak to strong. We show that, because of its labelled transitions, this technique does not work for ccp. We give a reduction that allows us to use this algorithm for ccp to decide this equivalence.
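For reference, the classical partition-refinement algorithm for strong bisimilarity on a finite labelled transition system, which the abstract says does not work for ccp without the pre-refinement and irredundant-bisimilarity adaptations, can be sketched as follows (a textbook version, not the ccp variant developed in the thesis):

    # Naive partition refinement for strong bisimilarity on a finite LTS.
    # transitions is a set of (source, label, target) triples.
    def bisimulation_partition(states, transitions):
        labels = sorted({a for (_, a, _) in transitions})

        def post(s, a):
            return frozenset(t for (p, l, t) in transitions if p == s and l == a)

        partition = [set(states)]                  # start from the trivial partition
        changed = True
        while changed:
            changed = False
            block_of = {s: i for i, b in enumerate(partition) for s in b}
            refined = []
            for block in partition:
                groups = {}
                for s in block:
                    # signature: for each label, the set of blocks reachable from s
                    sig = tuple(frozenset(block_of[t] for t in post(s, a))
                                for a in labels)
                    groups.setdefault(sig, set()).add(s)
                if len(groups) > 1:
                    changed = True
                refined.extend(groups.values())
            partition = refined
        return partition                           # blocks are the bisimilarity classes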
265
A refinement-based method for process modeling
GOLRA, Fahad Rafique, 08 January 2014
There is an increasing trend to consider the processes of an organization as one of its highly valuable assets. Processes are the reusable assets of an organization which define the procedures of routine working for accomplishing its goals. The software industry has the potential to become one of the most internationally dispersed high-tech industries. With the growing importance of the software and services sector, standardization of processes is also becoming crucial to maintaining credibility in the market. Software development processes follow a lifecycle that is very similar to the software development lifecycle. Similarly, multiple phases of a process development lifecycle follow an iterative/incremental approach that leads to continuous process improvement. This incremental approach calls for a refinement-based strategy to develop, execute and maintain software development processes. This thesis develops a conceptual foundation for refinement-based development of software processes, keeping in view the precise requirements for each individual phase of the process development lifecycle. It exploits model-driven engineering to present a multi-metamodel framework for the development of software processes, where each metamodel corresponds to a different phase of a process. A process undergoes a series of refinements until it is enriched with execution capabilities. Keeping in view the need to comply with the adopted standards, the architecture of the process modeling approach exploits the concept of abstraction. This mechanism also caters for special circumstances where a software enterprise needs to follow multiple process standards for the same project. On the basis of the insights gained from the examination of contemporary offerings in this domain, the proposed process modeling framework fosters an architecture developed around the concepts of "design by contract" and "design for reuse". This makes it possible to develop a process model that is modular in structure and guarantees the correctness of interactions between the constituent activities. Separation of concerns being the motivation, data-flow within a process is handled at a different abstraction level than the control-flow. Conformance between these levels makes it possible to offer a bi-layered architecture that handles the flow of data through an underlying event management system. An assessment of the capabilities of the proposed approach is provided through a comprehensive patterns-based analysis, which allows a direct comparison of its functionality with other process modeling approaches.
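The "design by contract" idea referred to above can be illustrated, in a deliberately toy form that is not the thesis's metamodel, as activities whose pre- and postconditions are checked around execution:

    # Toy illustration of "design by contract" applied to process activities;
    # the class and its fields are hypothetical, not the metamodel elements
    # defined in the thesis.
    class Activity:
        def __init__(self, name, preconditions, postconditions, body):
            self.name = name
            self.preconditions = preconditions    # callables over the process context
            self.postconditions = postconditions
            self.body = body

        def run(self, context):
            for check in self.preconditions:
                assert check(context), f"precondition violated before {self.name}"
            self.body(context)                    # the activity's actual work
            for check in self.postconditions:
                assert check(context), f"postcondition violated after {self.name}"
            return context

Composing such activities only through their contracts is one way to make the correctness of interactions between constituent activities checkable.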
266
Exploiting Model Structure in CEGAR Verification Method
Chucri, Farès, 27 November 2012
Software is now one of the essential components of modern equipment, and it is responsible for its safety and reliability. By safety we mean that the system guarantees that "nothing bad ever happens". This kind of property can be reduced to a reachability problem: to establish the property it suffices to show that a set of "bad" states is unreachable. This is particularly important for critical systems: systems whose failure can endanger human lives or the economy of a company. In order to guarantee a sufficient level of confidence in our modern equipment, a large number of verification methods have been proposed. Here we are interested in model checking: a formal method for system verification. The use of model-checking methods and model checkers improves the safety analysis of critical systems, since they make it possible to guarantee the absence of bugs with respect to the specified properties. Moreover, model checking is an automatic method, which allows non-specialist users to work with these tools and makes the method available to a large community of users in various industrial contexts. However, the combinatorial explosion of the state space remains a difficulty that limits the use of this method in an industrial setting. We present two verification methods for AltaRica models. The first presents a CEGAR algorithm that prunes states from the abstraction, which makes it possible to use an under-approximation of the state space of a system. Thanks to this under-approximation, we can detect simple counterexamples, use reduction methods to eliminate abstract states, which allows us to minimize the cost of counterexample analysis, and guide the exploration of the abstraction towards more relevant counterexamples. We implemented this algorithm in the Mec 5 model checker, and the experiments carried out confirmed the expected improvements.
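The generic counterexample-guided abstraction refinement (CEGAR) loop that the algorithm above builds on can be sketched as follows; this is the schematic textbook scheme, with the thesis-specific pruning and under-approximation steps deliberately left out:

    # Schematic CEGAR loop: model check an abstraction, analyse abstract
    # counterexamples, refine when they are spurious.  The callables are
    # placeholders; this is not the Mec 5 implementation.
    def cegar(system, bad_states, abstract, model_check, is_spurious, refine,
              max_iterations=100):
        abstraction = abstract(system)
        for _ in range(max_iterations):
            cex = model_check(abstraction, bad_states)
            if cex is None:
                return "safe"                        # bad states unreachable in the abstraction
            if not is_spurious(system, cex):
                return ("unsafe", cex)               # genuine counterexample in the system
            abstraction = refine(abstraction, cex)   # exclude the spurious behaviour
        return "inconclusive"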
267
Prediction of the formation of adiabatic shear bands in high strength low alloy 4340 steel through analysis of grains and grain deformation
Polyzois, Ioannis, 02 December 2014
High strain rate plastic deformation of metals results in the formation of localized zones of severe shear strain known as adiabatic shear bands (ASBs), which are a precursor to shear failure. The formation of ASBs in a high-strength low alloy steel, namely AISI 4340, was examined based on prior heat treatments (using different austenitization and tempering temperatures), testing temperatures, and impact strain rates in order to map out grain size and grain deformation behaviour during the formation of ASBs. In the current experimental investigation, ASB formation was shown to be a microstructural phenomenon which depends on microstructural properties such as grain size, shape, orientation, and the distribution of phases and hard particles, all controlled by the heat treatment process. Each grain is unique and its material properties are heterogeneous (based on its size, shape, and the complexity of the microstructure within the grain). Using measurements of grain size at various heat treatments as well as dynamic stress-strain data, a finite element model was developed using MATLAB and the explicit dynamics software LS-DYNA to simulate the microstructural deformation of grains during the formation of ASBs. The model simulates the geometrical grain microstructure of steel in 2D using a Voronoi tessellation algorithm and takes into account grain size, shape, orientation, and microstructural material property inhomogeneity between the grains and grain boundaries. The model takes advantage of the Smoothed Particle Hydrodynamics (SPH) meshless method to simulate highly localized deformation, as well as the Johnson-Cook plasticity material model for defining the behavior of the steel at various heat treatments under high strain rate deformation. The Grain Model provides a superior representation of the kinematics of ASB formation on the microstructural level, based on grain size, shape and orientation. It is able to simulate the microstructural mechanism of ASB formation and grain refinement in AISI 4340 steel more accurately and realistically than traditional macroscopic models, for a wide range of heat treatment and testing conditions.
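The Johnson-Cook flow-stress relation mentioned above has the standard form sigma = (A + B*eps^n)(1 + C*ln(eps_rate/eps_rate_ref))(1 - T*^m). The sketch below encodes that standard form; the default constants are commonly quoted literature values for 4340 steel and are placeholders only, not the values calibrated per heat treatment in the thesis:

    import math

    # Standard Johnson-Cook flow-stress form.  Default constants are
    # often-cited literature values for 4340 steel, used as placeholders.
    def johnson_cook_stress(eps_p, eps_rate, T,
                            A=792e6, B=510e6, n=0.26, C=0.014, m=1.03,
                            eps_rate_ref=1.0, T_room=293.0, T_melt=1793.0):
        """Flow stress (Pa) from equivalent plastic strain, strain rate (1/s)
        and temperature (K)."""
        strain_term = A + B * eps_p ** n
        rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1e-12))
        T_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
        return strain_term * rate_term * (1.0 - T_star ** m)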
268
On Bootstrap Evaluation of Tests for Unit Root and Cointegration
Wei, Jianxin, January 2014
This thesis comprises five papers that all relate to bootstrap methodology in the analysis of non-stationary time series. The first paper starts from the fact that the Dickey-Fuller unit root test based on asymptotic critical values has poor small-sample performance. The small-sample correction proposed by Johansen (2004) and the bootstrap are two effective methods to improve the performance of the test. In this paper we compare these two methods and analyse the effect of bias adjustment through a simulation study. We consider AR(1) and AR(2) models, and both size and power properties are investigated. The second paper studies the asymptotic refinement of the bootstrap cointegration rank test. We expand the test statistic of a simplified VECM model, and a Monte Carlo simulation is carried out to verify that the bootstrap test gives an asymptotic refinement. The third paper focuses on the number of bootstrap replicates in the bootstrap Dickey-Fuller unit root test. Through a simulation study, we find that a small number of bootstrap replicates is sufficient for a precise size, but with too few replicates we lose power when the null hypothesis is not true. The fourth and fifth papers of the thesis concern unit root testing in a panel setting, focusing on the test proposed by Palm, Smeekes and Urbain (2011). In the fourth paper, we study the robustness of the PSU test in comparison with two representative tests from the second generation of panel unit root tests. In the last paper, we generalise the PSU test to models with deterministic terms. Two different methods are proposed to deal with the deterministic terms, and the asymptotic validity of the bootstrap procedure is theoretically checked. The small-sample properties are studied by simulations and the paper is concluded by an empirical example.
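As a concrete, deliberately simplified illustration of the bootstrap Dickey-Fuller test discussed in the first and third papers, a residual bootstrap for the no-constant AR(1) case can be sketched as follows; this is a textbook-style sketch, not the exact procedures evaluated in the thesis:

    import numpy as np

    # Residual bootstrap of the Dickey-Fuller t-statistic for the no-constant
    # AR(1) case, imposing the unit-root null when resampling.
    def bootstrap_df_pvalue(y, n_boot=499, seed=None):
        y = np.asarray(y, dtype=float)
        rng = np.random.default_rng(seed)

        def df_tstat(series):
            dy, lag = np.diff(series), series[:-1]
            rho = (lag @ dy) / (lag @ lag)              # Delta y_t = rho * y_{t-1} + e_t
            resid = dy - rho * lag
            se = np.sqrt((resid @ resid) / (len(dy) - 1) / (lag @ lag))
            return rho / se, resid

        t_obs, resid = df_tstat(y)
        resid = resid - resid.mean()                    # recentre the residuals
        t_boot = np.empty(n_boot)
        for b in range(n_boot):
            e_star = rng.choice(resid, size=len(resid), replace=True)
            y_star = np.concatenate(([y[0]], y[0] + np.cumsum(e_star)))  # unit-root null
            t_boot[b], _ = df_tstat(y_star)
        return float((t_boot <= t_obs).mean())          # left-tailed bootstrap p-value

The choice of n_boot corresponds directly to the number of bootstrap replicates studied in the third paper.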
269
Constructing quantum spacetime: relation to classical gravity
Steinhaus, Sebastian, January 2014
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, there are numerous open questions left in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions (electromagnetic, weak and strong) on a flat, static spacetime without gravity.
A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime by 'quantum spacetime'. Several approaches exist that attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize.
Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory.
The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done at different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. diffeomorphism invariance in the discrete setting, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
270
Applications of Generic Interpolants In the Investigation and Visualization of Approximate Solutions of PDEs on Coarse Unstructured Meshes
Goldani Moghaddam, Hassan, 12 August 2010
In scientific computing, it is very common to visualize the approximate solution obtained by a numerical PDE solver by drawing surface or contour plots of all or some components of the associated approximate solutions. These plots are used to investigate the behavior of the solution and to display important properties or characteristics of the approximate solutions. In this thesis, we consider techniques for drawing such contour plots for the solution of two and three dimensional PDEs. We first present three fast contouring algorithms in two dimensions over an underlying unstructured mesh. Unlike standard contouring algorithms, our algorithms do not require a fine structured approximation. We assume that the underlying PDE solver generates approximations at some scattered data points in the domain of interest. We then generate a piecewise cubic polynomial interpolant (PCI) which approximates the solution of a PDE at off-mesh points based on the DEI (Differential Equation Interpolant) approach. The DEI approach assumes that accurate approximations to the solution and first-order derivatives exist at a set of discrete mesh points. The extra information required to uniquely define the associated piecewise polynomial is determined based on almost satisfying the PDE at a set of collocation points. In the process of generating contour plots, the PCI is used whenever we need an accurate approximation at a point inside the domain. The direct extension of both the DEI-based interpolant and the contouring algorithm to three dimensions is also investigated.
The DEI-based interpolant we introduce for visualization can also be used to develop effective Adaptive Mesh Refinement (AMR) techniques and global error estimates. In particular, we introduce and investigate four AMR techniques along with a hybrid mesh refinement technique. Our interest is in investigating how well such a 'generic' mesh selection strategy, based on properties of the problem alone, can perform compared with a special-purpose strategy that is designed for a specific PDE method. We also introduce an a posteriori global error estimator based on the solution of a companion PDE defined in terms of the associated PCI.
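For the two-dimensional case, the basic step of extracting contour segments triangle by triangle can be sketched with linear edge interpolation as below; the thesis replaces this linear evaluation with the piecewise cubic DEI-based interpolant, which this simplified sketch does not implement:

    # Simplified contour-segment extraction on an unstructured triangular mesh
    # using linear interpolation along edges.  Degenerate cases (vertices lying
    # exactly on the contour level) are ignored for brevity.
    def contour_segments(points, triangles, values, level):
        """points: list of (x, y); triangles: list of (i, j, k) vertex index
        triples; values: approximate solution at the vertices."""
        segments = []
        for i, j, k in triangles:
            crossings = []
            for a, b in ((i, j), (j, k), (k, i)):
                va, vb = values[a], values[b]
                if (va - level) * (vb - level) < 0:      # the edge crosses the level
                    t = (level - va) / (vb - va)
                    (xa, ya), (xb, yb) = points[a], points[b]
                    crossings.append((xa + t * (xb - xa), ya + t * (yb - ya)))
            if len(crossings) == 2:
                segments.append((crossings[0], crossings[1]))
        return segments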