11.
An investigation of renormalization group methods in the study of fluid turbulence, and their development for large eddy simulations. Hunter, Adrian. January 2000.
Turbulence is a problem of chaotic motion involving many length and time scales. When the Navier-Stokes equation is Fourier transformed, it comes to resemble a many-body problem in statistical physics, and as such is amenable to treatment by the Renormalization Group (RG), which reduces the number of degrees of freedom. The work of this thesis builds upon the RG approach due to McComb and Watt [Phys. Rev. A 46, 4797 (1992)], referred to as Two-field theory. After a brief introduction to turbulence in general and to other theories, we review the Two-field approach. The idea of <i>conditional averaging</i>, central to Two-field theory, is extended to the contrasting RG theory of Forster, Nelson and Stephen [Phys. Rev. A 16, 732 (1977)] to show how it addresses, in that setting, the question of the deterministic connection of turbulent modes, resolving a long-standing criticism of that work by relatively simple means. The results of Two-field theory, in the form of an eddy viscosity, are tested <i>a posteriori</i> in a high-resolution large eddy simulation (LES) for the first time. The results are reviewed with the aim of pursuing further investigations into the Two-field treatment of turbulent dynamics. In particular, the effect of the cross-term <i>u<sup>-</sup>u<sup>+</sup></i> is important, since Two-field theory deals with this term less well at the level of the momentum equation. These investigations are carried out by both theoretical and numerical means. Theoretically, we try to account for some of the cross term by using graded spectral filtering within the RG procedure, which has some relevance to the graded filters used in the mixed modelling presented later. The results are also tested briefly in a numerical simulation.
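The contrast between a sharp spectral cutoff and a graded filter can be illustrated with a smooth transfer function in place of the step at the cutoff wavenumber. The tanh profile and its parameters below are purely illustrative assumptions; they are not the specific graded filter shapes studied in the thesis:

```python
import math

def sharp_filter(k, kc):
    """Sharp spectral cutoff: a Fourier mode is resolved iff k < kc."""
    return 1.0 if k < kc else 0.0

def graded_filter(k, kc, width):
    """Smooth (graded) low-pass transfer function: close to 1 well below
    the cutoff kc, close to 0 well above it, with a transition region of
    the given width.  The sharp cutoff is the limit width -> 0."""
    return 0.5 * (1.0 - math.tanh((k - kc) / width))
```

A graded filter blurs the boundary between resolved (<i>u<sup>-</sup></i>) and subgrid (<i>u<sup>+</sup></i>) modes, which is one route to retaining part of the cross-term inside the RG procedure rather than discarding it at the cutoff.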
12.
The geometry of immobilizing sets of objects. Nsubuga, Saul Hannington. January 2003.
A new proof of Czyzowicz, Stojmenovic and Urrutia’s theorem, giving necessary and sufficient geometric conditions for immobilizing a triangle, is obtained. The same method of proof is employed to obtain proofs of statements on immobilizing sets of polygonal planar objects. In three dimensions, a detailed study of immobilizing sets of a tetrahedron is carried out. A 3 × 3 matrix <i>R</i> is defined for each quadruple of points, one from the interior of each face of the tetrahedron, using a suitable choice of outward normal vectors to the faces. A necessary and sufficient condition for the quadruple of points to immobilize the tetrahedron is that the matrix <i>R</i> is symmetric. An analysis of the eigenvalues of the symmetric matrix <i>R</i> leads to a new proof of Bracho, Mayer, Fetter and Montejano’s theorem. This proof is adapted to give another treatment of the necessary and sufficient conditions characterizing immobilizing sets of a triangle. The set of centroids, the set of circumcenters and the set of orthocenters of the faces of a tetrahedron are shown to immobilize it in appropriate cases. It is shown that a set of four immobilizing points, one in each face of the tetrahedron, has five degrees of freedom, and that immobilizing sets of a tetrahedron having two fixed points have one degree of freedom. An analysis of the orientation of the tetrahedron whose vertices are the points of an immobilizing set of a given tetrahedron reveals the existence of immobilizing sets of a regular tetrahedron which are co-planar. In higher dimensions, a method of generating, from one such set, further sets of points for which the matrix <i>R</i> is symmetric is presented, and some geometrical properties arising from the symmetry of <i>R</i> are analysed.
13.
Lattice gauge theory calculations of hadron phenomenology. McNeile, Craig. January 1992.
In this thesis I study Quantum Chromodynamics on the lattice. A central theme is the concept of improvement: choosing the lattice Lagrangian so as to minimise the effects of the lattice spacing on the results of numerical simulation. The first chapter reviews lattice gauge theory and introduces the idea of improvement. The techniques used in numerical simulations are briefly described. The second chapter discusses whether an improved lattice fermion action, the clover action, obeys the reflection positivity condition; this is related to the existence of a transfer matrix. In the third chapter I study the clover action in the strong coupling limit, and report results for the pion and rho masses. A calculation of the O(a) lattice artifact correction to the gluon vacuum polarisation diagram for the clover action is described in chapter 4. The penultimate chapter contains results from various numerical simulations of lattice QCD using the clover action: the masses of some P-wave mesons are reported and used in a calculation of the QCD coupling. Results from a simulation of particles at finite momentum are also discussed.
14.
New approaches to particle spectra in lattice QCD. Baxter, Robert M. January 1993.
I present a series of calculations of the spectrum of low-lying hadron states using the formalism of lattice QCD. I discuss the approach taken by the UKQCD Collaboration in the simulation of QCD on the Grand Challenge supercomputer at Edinburgh, and the use of an improved action to reduce the discretisation errors of the computer model. I describe the techniques used in calculating hadron masses and decay constants from first principles and discuss the implementation of a spectrum-analysis program on the massively-parallel Grand Challenge machine. I introduce a new method of smearing hadron operators to improve their signals in simulations and describe the development and implementation of the Jacobi smearing algorithm. I present the results of UKQCD's light hadron spectrum calculations, including a new method of analysing <I>SU</I>(3) flavour symmetry breaking for hadrons composed of non-degenerate quarks. I discuss the calculation of the masses of hadrons composed of <I>u, d </I>and <I>s </I>quarks and present mass estimates for the <I>a</I><SUB>0</SUB>, <I>a</I><SUB>1</SUB>, <I>b<SUB>1</SUB></I>, <I>K</I>*, φ and η<SUB>s</SUB> mesons and the nucleon, Σ, Δ(<SUP>3</SUP>/<SUB>2</SUB>), Ω, Δ(<SUP>1</SUP>/<SUB>2</SUB>), <I>N</I> and Δ(<SUP>3</SUP>/<SUB>2</SUB>) baryons. Finally, I extend the ideas of combining degenerate and non-degenerate datasets to include calculations of mesons composed of one heavy and one light quark. I indicate how the quark mass dependence of the pseudoscalar and vector mesons may be described by a single function for the regime 0 ≤ <I>m</I><SUB>quark</SUB> ≤ <I>m</I><SUB>charm</SUB>. Using this method, I present mass estimates for the <I>D, D*, D<SUB>s</SUB></I>, <I>D*</I><SUB>s</SUB>, η<SUB>c</SUB> and <I>J</I>/ψ mesons.
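Jacobi smearing builds an extended hadron operator by iterating a nearest-neighbour hopping operator on a point source, S<sub>N</sub> = Σ<sub>n=0..N</sub> κ<sup>n</sup>H<sup>n</sup>. The sketch below is a free-field one-dimensional toy (gauge links set to unity; the lattice size, κ and iteration count are illustrative choices, and the production algorithm hops gauge-covariantly in three spatial dimensions):

```python
def jacobi_smear(src, kappa=0.25, n_iter=10):
    """Free-field 1-D toy of Jacobi smearing on a periodic lattice:
    returns S_N src with S_N = sum_{n=0}^{N} kappa^n H^n, where H is
    the nearest-neighbour hopping operator."""
    L = len(src)

    def hop(v):
        # Nearest-neighbour hopping with periodic boundary conditions.
        return [v[(i - 1) % L] + v[(i + 1) % L] for i in range(L)]

    smeared = list(src)      # the n = 0 term
    term = list(src)
    for _ in range(n_iter):
        term = [kappa * t for t in hop(term)]          # kappa^n H^n src
        smeared = [s + t for s, t in zip(smeared, term)]
    return smeared

# A point source spreads into a symmetric profile around its site.
profile = jacobi_smear([1.0] + [0.0] * 7, kappa=0.25, n_iter=2)
```

Increasing the iteration count or κ widens the smeared source, which in practice improves overlap with the ground state at the cost of a noisier correlator.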
15.
Lattice QED. De Souza, Stephen William. January 1990.
We consider the question of the existence of an interacting continuum limit of Quantum Electrodynamics (QED). After a mention of why this limit may not exist and a discussion of how to formulate QED on a spacetime lattice, we review the recent analytic and numerical work on the strong-coupling phase of QED. We take the view that there definitely exists a strong-coupling fixed point in the space of bare parameters but that the behaviour of renormalised quantities in its neighbourhood is not yet understood. For non-compact lattice QED with staggered fermions we develop an expansion in the inverse bare fermion mass that we use to calculate charge and fermion mass renormalisation. We evaluate the vacuum polarisation to sixth order and present Feynman rules that allow its evaluation to higher orders. We also calculate the mass of the lowest-lying pseudoscalar bound state and the chiral condensate. These physical quantities enable us to construct the renormalisation group flow for all values of the bare charge. The expansion is checked against lattice perturbation theory and leads to a systematically improvable bound on the renormalised charge at the new fixed point. We also discuss compact QED coupled to scalars and, using mean field theory, find a chiral symmetry breaking transition at a non-zero value of the scalar coupling. After establishing that this transition has Landau exponents we attempt to develop corrections to mean field theory by introducing fluctuations. The conclusion discusses the future of the large-mass expansion and lists some unresolved issues in lattice QED.
16.
Further assessment of the LET theory. Filipiak, Mark. January 1992.
This thesis extends and analyses the Local Energy Transfer (LET) approximation for turbulence. LET is a two-point, two-time second moment closure for the Navier-Stokes equations, developed using renormalised perturbation theory in an Eulerian coordinate system. Analytical and numerical calculations of LET for the velocity field have been made in previous work. The LET approximation is extended to treat the transport of a passive scalar. The LET equations for passive scalar transport are derived and used in numerical calculations at a range of Reynolds and Prandtl numbers. The evolution in time of the scalar energy, dissipation and transfer spectra is calculated, and these spectra are shown to become self-similar under convective or Kolmogorov scaling. The scalar energy, dissipation and transfer spectra at <i>R</i><SUB>λ</SUB> ≈ 40 compare well with experiment. The two-time scalar correlation is calculated and the relevant scaling for the time separation is shown to be convective at small Reynolds number and Kolmogorov (i.e. inertial) at large Reynolds number. The effect of the ratio of the velocity energy spectrum peak wavenumber to the scalar energy peak wavenumber on the thermal to mechanical time-scale ratio is compared with experiment. At large Reynolds number the scalar energy spectrum is shown to have a <i>k</i><SUP>-5/3</SUP> inertial-convective range at <i>Pr</i> = 0.5, with a value of 1.13 for the Obukhov-Corrsin constant β. The scalar energy balance is calculated at several Reynolds numbers and at large Reynolds number shows a clear separation in wavenumber of the production (in fact the energy peak in decaying turbulence) and dissipation ranges. The dependence of the velocity-scalar cross derivative skewness on the Reynolds and Prandtl numbers is compared with direct numerical simulation and experiment. The magnitude and the Reynolds number dependence of the skewness are in fair agreement with the simulation, but the Prandtl number dependence is reversed.
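The <i>k</i><SUP>-5/3</SUP> inertial-convective range quoted above is the Obukhov-Corrsin form E<sub>θ</sub>(k) = β χ ε<sup>-1/3</sup> k<sup>-5/3</sup>, where χ is the scalar dissipation rate and ε the energy dissipation rate; β = 1.13 is the value found in the calculation above. A one-function sketch (the χ, ε and k values passed in are illustrative, not taken from the thesis):

```python
def obukhov_corrsin_spectrum(k, chi, eps, beta=1.13):
    """Scalar energy spectrum in the inertial-convective range:
    E_theta(k) = beta * chi * eps**(-1/3) * k**(-5/3)."""
    return beta * chi * eps ** (-1.0 / 3.0) * k ** (-5.0 / 3.0)
```

Doubling the wavenumber anywhere inside this range lowers the spectrum by the factor 2<sup>-5/3</sup>, independently of χ and ε, which is what makes the slope a clean experimental signature.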
The Galilean transformation properties of the Navier-Stokes equations, velocity moment equations, perturbation expansion and LET are investigated. The perturbation expansion (used to derive LET) is shown to be invariant under a Galilean transformation, term by term, thus any truncation will be Galilean invariant. The LET equations are also shown to be Galilean invariant. The concept of Random Galilean Transformation (RGT) is analysed. The RGT was developed by Kraichnan to model the convective effects of the large scales in turbulence. Invariance under a RGT is violated by Eulerian renormalised perturbation theories - this led to the development of quasi-Lagrangian theories. The RGT is shown to be a change of ensemble rather than a symmetry transformation. This change of ensemble makes the derivation of an Eulerian renormalised perturbation theory impossible as the zero-order solution is no longer Gaussian and the zero-order propagator/response function becomes a random variable.
17.
Efficient global optimization: analysis, generalizations and extensions. Mayer, Theresia. January 2003.
In some optimization problems the evaluation of the objective function is very expensive. It is therefore desirable to find the global optimum of the function with only comparatively few function evaluations. Because of the expense of evaluations it is justified to put significant effort into finding good sample points and using all the available information about the objective function. One way of achieving this is by assuming that the function can be modelled as a stochastic process and fitting a response surface to it, based on function evaluations at a set of points determined by an initial design. Parameters in the model are estimated when fitting the response surface to the available data. In determining the next point at which to evaluate the objective function, a balance must be struck between local search and global search. Local search in a neighbourhood of the minimum of the approximating function has the aim of finding a point with improved objective value. The aim of global search is to improve the approximation by maximizing an error function which reflects the uncertainty in the approximating function. Such a balance is achieved by using the expected improvement criterion. In this approach the next sample point is chosen where the expected improvement is maximized. The expected improvement at any point in the range reflects the expected amount of improvement of the approximating function beyond a target value (usually the best function value found up to this point) at that point, taking into account the uncertainty in the approximating function. In this thesis, we present and examine the expected improvement approach and the maximization of the expected improvement function.
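For a Gaussian predictive distribution with mean μ(x) and standard deviation σ(x), the expected improvement below the current best value f_best has the closed form EI = (f_best − μ)Φ(z) + σφ(z) with z = (f_best − μ)/σ. A minimal sketch of the criterion and its maximization (maximizing over a finite candidate set is a simplification I am assuming for illustration; in practice the EI surface is maximized with its own global optimizer):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) at a point where the fitted
    response surface predicts mean `mu` and standard deviation `sigma`,
    relative to the best objective value `f_best` found so far:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma
    """
    if sigma <= 0.0:
        return 0.0                     # no uncertainty means no expected gain
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal Phi
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal phi
    return (f_best - mu) * cdf + sigma * pdf

def next_sample(candidates, predict, f_best):
    """Pick the candidate maximizing expected improvement.  `predict`
    maps a point to (mu, sigma) from the fitted stochastic-process model."""
    return max(candidates, key=lambda x: expected_improvement(*predict(x), f_best))
```

The two terms encode the balance described above: the first rewards points whose predicted value beats f_best (local search), the second rewards points where the model is uncertain (global search).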
18.
Using modal logic proofs to test implementation-specification relations. Paxton, Alan. January 2000.
This thesis shows how to make use of the intensional information relating specifications to implementations. It views the proofs of properties of specifications as identifying the intensional parts of implementations relevant to the property. It provides a concrete instance of such proofs by adopting labelled transition systems, the modal mu-calculus and the tableau methods of Stirling and Bradfield as a framework for generating intensional information. The intensional information generated from proofs about models of systems can be used to verify behaviours of implementations of those systems. By annotating implementations with the atomic actions of their models we can apply an oracle technique to verify implementation behaviour. The extra richness of intensional information allows oracles derived from proofs, rather than just from properties, to be much more discriminating of failures in the implementation. The emphasis of oracle-based testing and verification is on practical improvements in the quality of distributed systems. The intensional idea is therefore developed into a framework for a practical system. Case study systems are examined to identify where system developers can be helped by computerised systems to integrate auditioning into the software development process.
19.
Operator theory on <i>C<sub>p</sub></i> spaces. Lioudaki, Vasiliki. January 2004.
No description available.
20.
A numerical method for perturbative QCD calculations. Ramtohul, Mark Anthony Sookraj. January 2004.
Standard methods for performing analytic perturbative calculations for the process <i>e</i><sup>+</sup><i>e</i><sup>-</sup> → <i>qq̄</i> up to O(<i>α<sub>s</sub></i>) are explained and results given. Emphasis is placed on the organisation of calculations using the Cutkosky cutting rules and on the renormalisation of the massive quark propagator. Methods for numerical integration are presented, including those used in VEGAS. The numerical methods used in the Beowulf program for calculating infra-red safe observables for jet events from electron-positron collisions are also explained. The cancellations of singularities required for numerical calculations are demonstrated using an example in φ<sup>3</sup> theory, both numerically and graphically. Renormalisation by subtraction of appropriate integrals is also covered. Adaptations of the Beowulf procedure required for the inclusion of massive fermions are developed and explained. An alternative method for including the quark self-energy and its related cuts, using scalar decomposition, numerically equivalent integrals and its spinor structure, is introduced. The methods are used to calculate the O(<i>α<sub>s</sub></i>) corrections to the process <i>e</i><sup>+</sup><i>e</i><sup>-</sup> → <i>qq̄</i> using VEGAS. Drawbacks of the smearing function required in the numerical integration, arising from the corrections' dependence on the mass and centre-of-mass energy, are discussed. Results for the O(<i>α<sub>s</sub></i>) cross section using the numerical method verify the procedure. The method is then used to examine the effects of mass on the thrust distribution and when using the Durham and JADE jet algorithms.
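The VEGAS algorithm mentioned above combines Monte Carlo sampling with an adaptive, separable grid that narrows bins where the integrand is large, a form of importance sampling. A stripped-down one-dimensional illustration (the bin count, sample counts and rebinning rule are simplified assumptions; the real algorithm damps the rebinning and applies it per axis in many dimensions):

```python
import random

def rebin(edges, weights):
    """Move bin edges so each new bin carries an equal share of `weights`."""
    n = len(weights)
    total = sum(weights)
    if total == 0.0:
        return edges
    new_edges = [edges[0]]
    cum, i = 0.0, 0
    for k in range(1, n):
        goal = k * total / n
        while cum + weights[i] < goal:        # consume whole bins
            cum += weights[i]
            i += 1
        frac = (goal - cum) / weights[i] if weights[i] > 0 else 0.0
        new_edges.append(edges[i] + frac * (edges[i + 1] - edges[i]))
    new_edges.append(edges[-1])
    return new_edges

def vegas_1d(f, n_bins=50, n_evals=2000, n_iters=5):
    """Estimate the integral of f over [0, 1]: pick a bin uniformly,
    sample uniformly inside it, and weight by the inverse sampling
    density; then adapt the grid toward bins where |f| is large."""
    edges = [i / n_bins for i in range(n_bins + 1)]
    estimate = 0.0
    for _ in range(n_iters):
        acc = [0.0] * n_bins
        total = 0.0
        for _ in range(n_evals):
            b = random.randrange(n_bins)
            lo, hi = edges[b], edges[b + 1]
            x = lo + random.random() * (hi - lo)
            w = n_bins * (hi - lo)            # 1 / (sampling density at x)
            fx = w * f(x)
            total += fx
            acc[b] += abs(fx)                 # large-|f| bins get narrowed
        estimate = total / n_evals            # unbiased at every iteration
        edges = rebin(edges, acc)
    return estimate
```

Because each iteration's estimate is unbiased, the adaptation only reduces the variance; this is the property that makes the grid refinement safe to iterate.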