271 |
Logique du vague : survol des principales théories
Girard, Claire 08 1900 (has links)
No description available.
|
272 |
Adaptive Mesh Refinement Solution Techniques for the Multigroup SN Transport Equation Using a Higher-Order Discontinuous Finite Element Method
Wang, Yaqi 16 January 2010 (has links)
In this dissertation, we develop Adaptive Mesh Refinement (AMR) techniques
for the steady-state multigroup SN neutron transport equation using a higher-order
Discontinuous Galerkin Finite Element Method (DGFEM). We propose two error estimators,
a projection-based estimator and a jump-based indicator, both of which
are shown to reliably drive the spatial discretization error down using h-type AMR.
Algorithms to treat the mesh irregularity resulting from the local refinement are
implemented in a matrix-free fashion. The DGFEM spatial discretization scheme
employed in this research allows the easy use of adapted meshes and can, therefore,
follow the physics tightly by generating group-dependent adapted meshes. Indeed,
the spatial discretization error is controlled with AMR for the entire multigroup SN transport
simulation, resulting in group-dependent AMR meshes. The computing
efforts, both in memory and CPU-time, are significantly reduced. While the convergence
rates obtained using uniform mesh refinement are limited by the singularity
index of the transport solution (3/2 when the solution is continuous, 1/2 when it is discontinuous),
the convergence rates achieved with mesh adaptivity are superior. The
accuracy in the AMR solution reaches a level where the solution's angular error (or ray
effects) is highlighted by the mesh adaptivity process. The superiority of higher-order
calculations based on a matrix-free scheme is verified on modern computing architectures.
A stable symmetric positive definite Diffusion Synthetic Acceleration (DSA)
scheme is devised for the DGFEM-discretized transport equation using a variational
argument. The Modified Interior Penalty (MIP) diffusion form used to accelerate the
SN transport solves has been obtained directly from the DGFEM variational form of
the SN equations. This MIP form is stable and compatible with AMR meshes. Because
this MIP form is based on a DGFEM formulation as well, it avoids the costly
continuity requirements of continuous finite elements. It has been used as a preconditioner
for both the standard source iteration and the GMRes solution technique
employed when solving the transport equation. The variational argument used in
devising transport acceleration schemes is a powerful tool for obtaining transport-conforming
diffusion schemes.
xuthus, a 2-D AMR transport code implementing these findings, has been developed
for unstructured triangular meshes.
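The jump-based indicator mentioned above can be caricatured in one dimension: for a piecewise-constant DG solution, the inter-element jumps themselves serve as per-cell refinement indicators. A minimal illustrative sketch (the function names and the fixed-fraction marking strategy are invented for this example, not taken from the dissertation):

```python
def jump_indicator(cell_values):
    """Per-cell refinement indicator from inter-element solution jumps
    (1-D, piecewise-constant caricature of a DG jump indicator): each
    cell is charged the jumps on its two faces."""
    eta = [0.0] * len(cell_values)
    for i in range(len(cell_values) - 1):
        jump = abs(cell_values[i + 1] - cell_values[i])
        eta[i] += jump
        eta[i + 1] += jump
    return eta

def mark_for_refinement(eta, frac=0.5):
    """Flag cells whose indicator exceeds a fixed fraction of the peak."""
    threshold = frac * max(eta)
    return [i for i, e in enumerate(eta) if e > threshold]

# a sharp feature in cell 2 dominates the jump budget
eta = jump_indicator([1.0, 1.0, 5.0, 1.0])
marked = mark_for_refinement(eta)
```

An h-AMR loop would refine the marked cells, re-solve, and repeat until the indicator falls below a tolerance.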
|
273 |
New methods for estimation, modeling and validation of dynamical systems using automatic differentiation
Griffith, Daniel Todd 17 February 2005 (has links)
The main objective of this work is to demonstrate some new computational methods
for estimation, optimization and modeling of dynamical systems that use automatic
differentiation. Particular focus will be upon dynamical systems arising in Aerospace
Engineering. Automatic differentiation is a recursive computational algorithm, which
enables computation of analytically rigorous partial derivatives of any user-specified
function. All associated computations occur, in the background without user
intervention, as the name implies. The computational methods of this dissertation are
enabled by a new automatic differentiation tool, OCEA (Object oriented Coordinate
Embedding Method). OCEA has been recently developed and makes possible efficient
computation and evaluation of partial derivatives with minimal user coding. The key
results in this dissertation detail the use of OCEA through a number of computational
studies in estimation and dynamical modeling.
Several prototype problems are studied in order to evaluate judicious ways to use
OCEA. Additionally, new solution methods are introduced in order to ascertain the
extended capability of this new computational tool. Computational tradeoffs are studied
in detail by looking at a number of different applications in the areas of estimation,
dynamical system modeling, and validation of solution accuracy for complex dynamical
systems. The results of these computational studies provide new insights and indicate
the future potential of OCEA in its further development.
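OCEA hides derivative propagation behind operator overloading; the core idea of forward-mode automatic differentiation can be sketched with dual numbers. This is a minimal illustration of the concept, not OCEA's actual API (all names below are invented for the example):

```python
import math

class Dual:
    """Dual number a + b*eps with eps^2 = 0; `der` carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule happens automatically on every multiply
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def sin(self):
        # chain rule for an elementary function
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

def derivative(f, x):
    """Seed the derivative slot with 1 and read it back after evaluation."""
    return f(Dual(x, 1.0)).der

# d/dx [x*sin(x) + 3x] at x = 2, computed without any symbolic work
df = derivative(lambda x: x * x.sin() + 3 * x, 2.0)
```

Every arithmetic operation applies the product and chain rules as a side effect, which is exactly why no user intervention is needed once the overloads exist.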
|
274 |
Functional Genetic Analysis Reveals Intricate Roles of Conserved X-box Elements in Yeast Transcriptional Regulation
Voll, Sarah 13 November 2013 (has links)
Understanding the functional impact of physical interactions between proteins and
DNA on gene expression is important for developing approaches to correct disease-associated gene dysregulation. I conducted a systematic, functional genetic analysis of protein-DNA interactions in the promoter region of the yeast ribonucleotide reductase
subunit gene RNR3. I measured the transcriptional impact of systematically
perturbing the major transcriptional regulator, Crt1, and three X-box sites on the
DNA known to physically bind Crt1. This analysis revealed interactions between
two of the three X-boxes in the presence of Crt1, and unexpectedly, a significant
functional role of the X-boxes in the absence of Crt1. Further analysis revealed Crt1-
independent regulators of RNR3 that were impacted by X-box perturbation. Taken
together, these results support the notion that higher-order X-box-mediated interactions
are important for RNR3 transcription, and that the X-boxes have unexpected roles in the regulation of RNR3 transcription that extend beyond their interaction with Crt1.
|
275 |
Improving predictions for collider observables by consistently combining fixed order calculations with resummed results in perturbation theory
Schönherr, Marek 12 March 2012 (has links) (PDF)
With the constantly increasing precision of experimental data acquired at the current collider experiments Tevatron and LHC, the theoretical uncertainty on the prediction of multiparticle final states has to decrease accordingly in order to have meaningful tests of the underlying theories such as the Standard Model. A pure leading-order calculation, defined in the perturbative expansion of said theory in the interaction constant, represents the classical limit to such a quantum field theory and was already found to be insufficient at past collider experiments, e.g. LEP or HERA. Such a leading-order calculation can be systematically improved in various limits. If the typical scales of a process are large and the respective coupling constants are small, the inclusion of fixed-order higher-order corrections then yields quickly converging predictions with much reduced uncertainties. In certain regions of the phase space, still well within the perturbative regime of the underlying theory, a clear hierarchy of the inherent scales, however, leads to large logarithms occurring at every order in perturbation theory. In many cases these logarithms are universal and can be resummed to all orders, leading to precise predictions in these limits. Multiparticle final states now exhibit both small and large scales, necessitating a description using both resummed and fixed-order results. This thesis presents the consistent combination of two such resummation schemes with fixed-order results. The main objective therefore is to identify and properly treat terms that are present in both formulations in a process- and observable-independent manner.
In the first part the resummation scheme introduced by Yennie, Frautschi and Suura (YFS), resumming large logarithms associated with the emission of soft photons in massive QED, is combined with fixed-order next-to-leading-order matrix elements. The implementation of a universal algorithm is detailed and results are studied for various precision observables in e.g. Drell-Yan production or semileptonic B meson decays. The results obtained for radiative tau and muon decays are also compared to experimental data.
In the second part the resummation scheme introduced by Dokshitzer, Gribov, Lipatov, Altarelli and Parisi (DGLAP), resumming large logarithms associated with the emission of collinear partons, applicable to both QCD and QED, is combined with fixed-order next-to-leading-order matrix elements. While the focus rests on its application to QCD corrections, this combination is discussed in detail and the implementation is presented. The resulting predictions are evaluated and compared to experimental data for a multitude of processes in four different collider environments. This formulation has been further extended to accommodate real-emission corrections to radiation beyond next-to-leading order that is otherwise described only by the DGLAP resummation. Its results are also carefully evaluated and compared to a wide range of experimental data.
|
276 |
Uniform Error Estimation for Convection-Diffusion Problems
Franz, Sebastian 27 February 2014 (has links) (PDF)
Let us consider the singularly perturbed model problem
Lu := -epsilon laplace u - b u_x + cu = f
with homogeneous Dirichlet boundary conditions on the unit-square (0,1)^2. Assuming that b > 0 is of order one, the small perturbation parameter 0 < epsilon << 1 causes boundary layers in the solution.
In order to solve the above problem numerically, it is beneficial to resolve these layers. On properly layer-adapted meshes we can apply finite element methods and observe convergence.
We will consider standard Galerkin and stabilised FEM applied to the above problem. Therein the polynomial order p will usually be greater than two, i.e. we will consider higher-order methods.
Most of the analysis presented here is done in the standard energy norm. Nevertheless, the question arises: Is this the right norm for this kind of problem, especially if characteristic layers occur? We will address this question by looking into a balanced norm.
Finally, a-posteriori error analysis is an important tool to construct adapted meshes iteratively by solving discrete problems, estimating the error and adjusting the mesh accordingly. We will present estimates on the Green's function associated with L that can be used to derive pointwise error estimators.
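A standard example of a layer-adapted mesh is the piecewise-uniform Shishkin mesh, which places half of the mesh points inside a layer region of width tau ~ (sigma * epsilon / beta) * ln N. A sketch, assuming a single exponential layer at x = 0 (parameter names here are the generic textbook ones, not necessarily the thesis's notation; sigma is usually tied to the polynomial order):

```python
import math

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0,1] with N cells (N even):
    half the points are condensed into the layer region [0, tau]."""
    tau = min(0.5, sigma * eps / beta * math.log(N))
    # N/2 fine cells inside the layer, N/2 coarse cells outside
    fine = [2.0 * tau * i / N for i in range(N // 2 + 1)]
    coarse = [tau + 2.0 * (1.0 - tau) * (i - N // 2) / N
              for i in range(N // 2 + 1, N + 1)]
    return fine + coarse

mesh = shishkin_mesh(16, 1e-4)
```

For eps << 1 the fine spacing is O(eps ln N) while the coarse spacing stays O(1/N), which is what makes uniform (eps-independent) convergence estimates possible.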
|
277 |
Characterization of nonlinearity parameters in an elastic material with quadratic nonlinearity with a complex wave field
Braun, Michael Rainer 19 November 2008 (has links)
This research investigates wave propagation in an elastic half-space with a
quadratic nonlinearity in its stress-strain relationship. Different boundary conditions
on the surface are considered that result in both one- and two-dimensional wave
propagation problems. The goal of the research is to examine the generation of
second-order frequency effects and static effects which may be used to determine
the nonlinearity present in the material. This is accomplished by extracting the
amplitudes of those effects in the frequency domain and analyzing their dependency
on the third-order elastic constants (TOEC). For the one-dimensional problems, both
analytical approximate solutions as well as numerical simulations are presented. For
the two-dimensional problems, numerical solutions are presented whose dependency
on the material's nonlinearity is compared to the one-dimensional problems. The
numerical solutions are obtained by first formulating the problem as a hyperbolic
system of conservation laws, which is then solved numerically using a semi-discrete
central scheme. The numerical method is implemented using the package CentPack.
In the one-dimensional cases, it is shown that the analytical and numerical solutions
are in good agreement with each other, as well as how different boundary conditions
may be used to measure the TOEC. In the two-dimensional cases, it is shown that
there exist comparable dependencies of the second-order frequency effects and static
effects on the TOEC. Finally, it is analytically and numerically investigated how
multiple reflections in a plate can be used to simplify measurements of the material
nonlinearity in an experiment.
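The dissertation solves the hyperbolic system of conservation laws with a semi-discrete central scheme via CentPack; as a much simpler relative of that family, a first-order Lax-Friedrichs step for a scalar conservation law u_t + f(u)_x = 0 illustrates the basic finite-volume update (a sketch for illustration only, not the scheme actually used in the research):

```python
def lax_friedrichs_step(u, flux, dx, dt):
    """One step of the first-order Lax-Friedrichs scheme for
    u_t + f(u)_x = 0 on a periodic grid: average the neighbours,
    then correct with the centred flux difference."""
    n = len(u)
    f = [flux(v) for v in u]
    return [0.5 * (u[(i + 1) % n] + u[(i - 1) % n])
            - dt / (2.0 * dx) * (f[(i + 1) % n] - f[(i - 1) % n])
            for i in range(n)]

# linear advection f(u) = u: with dt = dx the profile shifts right one cell
u0 = [0.0, 1.0, 0.0, 0.0]
u1 = lax_friedrichs_step(u0, lambda v: v, dx=1.0, dt=1.0)
```

Higher-order central schemes of the Kurganov-Tadmor type replace the neighbour average with piecewise-linear reconstructions and local wave speeds, but the stencil structure is the same.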
|
278 |
Vibration Signal Features for the Quantification of Prosthetic Loosening in Total Hip Arthroplasties
Stevenson, Nathan January 2003
This project attempts to quantify the integrity of the fixation of total hip arthroplasties (THAs) by observing vibration signal features. The aim of this thesis is, therefore, to find the signal differences between firm and loose prostheses. These differences will be expressed in different transformed domains with the expectation that a certain domain will provide superior results. Once the signal differences have been determined they will be examined for their ability to quantify the looseness. Initially, a new definition of progressive femoral component loosening was created, based on the application of mechanical fit, involving four general conditions. In order of increasing looseness the conditions (with their equivalent engineering associations) are listed as firm (adherence), firm (interference), micro-loose (transition) and macro-loose (clearance). These conditions were then used to aid in the development and evaluation of a simple mathematical model based on an ordinary differential equation. Several possible parameters well suited to quantification, such as gap displacement, cement/interface stiffness and apparent mass, were then identified from the model. In addition, the development of this model provided a solution to the problem of unifying early and late loosening mentioned in the literature by Li et al. in 1995 and 1996. This unification permitted early (micro-loose) and late (macro-loose) loosening to be quantified, if necessary, with the same parameter. The quantification problem was posed as a detection problem by utilising a varying amplitude input. A set of detection techniques was developed to detect a critical value, in this case a force.
The detection techniques include deviation measures of the instantaneous frequency of the impulse response of the system (accuracy of 100%), linearity of the system's response to Gaussian input (total accuracy of 97.9% over all realisations) and observed resonant frequency linearity with respect to displacement magnitude (accuracy of 100%). Note that, as these techniques were developed with the model in mind, their simulated performance was considerably high. The critical value found by the detector was then fed into the model and a quantified output was calculated. The quantification techniques using the critical value approach include ramped amplitude input resonant analysis (experimental accuracy of 94%) and ramped amplitude input stochastic analysis (experimental accuracy of 90%). These techniques were based on analysing the response of the system to a ramping amplitude input force in the time-frequency domain and with respect to its short-time statistical moments, respectively. In addition, other mechanically sound forms of analysis were then applied to the output of the nonlinear model with the aim of quantifying the looseness or the integrity of fixation of the THA. The cement/interface stiffness and apparent mass techniques, inspired by the work of Chung et al. in 1979, attempt to assess the integrity of fixation of the THA by tracking the mechanical behaviour of the components of the THA, using the frequency and magnitude of the raw transducer data. This technique has been developed from the theory of Chung et al. but with a differing perspective and provides accuracies of 82% in experimentation and 71% in simulation for the apparent mass and interface stiffness techniques, respectively.
These techniques do not quantify all forms of clinical loosening, as clinical loosening can exist in many different forms, but they do quantify mechanical loosening or the mechanical functionality of the femoral component through related parameters that observe reduction in mechanical mass, stiffness and the amount of rattle generated by a select gap between the bone/cement or prosthesis/cement interface. This form of mechanical loosening is currently extremely difficult to detect using radiographs. It is envisaged that a vibration test be used in conjunction with radiographs to provide a more complete picture of the integrity of fixation of the THA.
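Instantaneous-frequency measures of the kind used above can be illustrated with a very crude estimator: the frequency implied by successive rising zero crossings of a sampled response signal. A sketch for illustration only (not the thesis's actual detector, which works on the impulse response of the fitted model):

```python
import math

def zero_crossing_freq(signal, fs):
    """Crude instantaneous-frequency estimate: the frequency implied by
    each pair of successive rising zero crossings of a sampled signal."""
    rising = [i for i in range(1, len(signal))
              if signal[i - 1] < 0.0 <= signal[i]]
    return [fs / (b - a) for a, b in zip(rising, rising[1:])]

# 50 Hz tone sampled at 1 kHz; the phase offset keeps samples off the zeros
fs, f0 = 1000.0, 50.0
sig = [math.sin(2.0 * math.pi * f0 * t / fs + math.pi / 4.0)
       for t in range(1000)]
freqs = zero_crossing_freq(sig, fs)
```

A deviation measure would then summarise how far this track strays from a constant, which is one way a nonlinearity (e.g. a rattling gap) can betray itself.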
|
279 |
Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation
Grah, Joana Sarah January 2018
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo and shade-off effects impede image analysis. We present a novel workflow uniting both automated mitotic cell detection with the Hough transform and subsequent cell tracking by a tailor-made level-set method in order to obtain statistics on length of mitosis and cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation in the general context of imaging inverse problems, in which circular objects should be enhanced: (ii) exploiting sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to an image to be reconstructed using curl, divergence and shear operators.
The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how we can learn sparsity-promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised eigenproblems. Learning approaches have recently become very popular in the field of inverse problems. However, the majority aims at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers and extend this framework to classification problems.
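The circular Hough transform used for mitotic cell detection can be sketched as a voting procedure: every edge pixel votes for all candidate centres lying at a fixed radius from it, and detected circles correspond to peaks in the accumulator. A minimal single-radius version for illustration (real detectors sweep a radius range and exploit gradient direction; none of this is MitosisAnalyser's code):

```python
import math

def circular_hough(edges, radius, shape):
    """Vote for circle centres of one fixed radius, given edge-pixel
    coordinates; peaks in the returned accumulator mark likely centres."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for y, x in edges:
        for t in range(360):  # sample the candidate-centre ring
            a = int(round(y - radius * math.sin(math.radians(t))))
            b = int(round(x - radius * math.cos(math.radians(t))))
            if 0 <= a < h and 0 <= b < w:
                acc[a][b] += 1
    return acc

# a single edge pixel spreads all of its votes over the radius-5 ring
acc = circular_hough({(10, 10)}, 5, (21, 21))
```

With many edge pixels from an actual circular boundary, the votes reinforce each other only at the true centre, which is what makes the transform robust to the halo artefacts mentioned above.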
|
280 |
Complexidade descritiva das lógicas de ordem superior com menor ponto fixo e análise de expressividade de algumas lógicas modais / Descriptive complexity of higher-order logics with least fixed point and expressiveness analysis of some modal logics
Freire, Cibele Matos January 2010 (has links)
In Descriptive Complexity, we investigate the use of logics to characterize computational classes of problems through complexity. Since 1974, when Fagin proved that the class NP is captured by existential second-order logic, considered the first result in this area, other relations between logics and complexity classes have been established. Well-known results usually involve first-order logic and its extensions, and complexity classes in polynomial time or space. Some examples are that first-order logic extended by the least fixed-point operator captures the class P and second-order logic extended by the transitive closure operator captures the class PSPACE. In this dissertation, we will initially analyze the expressive power of some modal logics with respect to the decision problem REACH and see that it is possible to express it with the temporal logics CTL and CTL*. We will also analyze the combined use of higher-order logics extended by the least fixed-point operator and obtain as a result that each level of this hierarchy captures each level of the deterministic exponential time hierarchy. As a corollary, we will prove that the hierarchy of HOi(LFP), for i >= 2, does not collapse, that is, HOi(LFP) ⊊ HOi+1(LFP).
FREIRE, Cibele Matos. Complexidade descritiva das lógicas de ordem superior com menor ponto fixo e análise de expressividade de algumas lógicas modais. 2010. 54 f. Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2010.
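The connection between REACH and the least fixed-point operator is concrete: the CTL reachability formula EF goal is the least fixed point of X = goal ∪ pre(X), computable by naive iteration on an explicit transition graph. A small illustrative sketch (not from the dissertation; the graph and names are invented):

```python
def ef(succ, goal):
    """Least fixed point computing EF goal: the set of states from which
    some path reaches a goal state.  Iterate X = goal | pre(X) until
    stable, mirroring the LFP operator on the reachability formula."""
    states = set(succ)
    x = set(goal)
    while True:
        pre = {s for s in states if any(t in x for t in succ[s])}
        new = x | pre
        if new == x:
            return x
        x = new

# 1 -> 2 -> 3 (dead end), 4 -> 4 (self-loop)
graph = {1: [2], 2: [3], 3: [], 4: [4]}
```

Since each iteration only adds states and the state set is finite, the loop reaches the least fixed point in at most |states| rounds.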
|