1

Mathematical solutions to problems in radiological protection involving air sampling and biokinetic modelling

Birchall, Alan January 1998 (has links)
Intakes of radionuclides are estimated with the personal air sampler (PAS) and by biological monitoring techniques: in the case of plutonium, there are problems with both methods. The statistical variation in activity collected when sampling radioactive aerosols with low number concentrations was investigated. By treating man as an ideal sampler, an analytical expression was developed for the probability distribution of intake following a single measurement on a PAS. The dependence on aerosol size, specific activity and density was investigated. The methods were extended to apply to routine monitoring procedures for plutonium. Simple algebraic approximations were developed to give the probability of exceeding estimated intakes and doses by given factors. The conditions were defined under which PAS monitoring meets the ICRP definition of adequacy. It was shown that the PAS is barely adequate for monitoring plutonium at ALI levels in typical workplace conditions. Two algorithms were developed, enabling non-recycling and recycling compartmental models to be solved. Their accuracy and speed were investigated, and methods of dealing with partitioning, continuous intake, and radioactive progeny were discussed. Analytical, rather than numerical, methods were used. These are faster, and thus ideally suited for implementation on microcomputers. The algorithms enable non-specialists to solve quickly and easily any first-order compartmental model, including all the ICRP metabolic models. Non-recycling models with up to 50 compartments can be solved in seconds; recycling models take a little longer. A biokinetic model for plutonium in man following systemic uptake was developed. The proposed ICRP lung model (1989) was represented by a first-order compartmental model. These two models were combined, and the recycling algorithm was used to calculate urinary and faecal excretion of plutonium following acute or chronic intake by inhalation. The results indicate much lower urinary excretion than predicted by ICRP Publication 54.
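As a flavour of the analytical (rather than numerical) approach described above, the sketch below implements the classical Bateman solution for a non-recycling catenary chain of first-order compartments. It is a generic illustration, not the thesis's algorithm; the function name and the rate constants in the example are arbitrary.

```python
import numpy as np

def bateman_chain(lambdas, n0, t):
    """Analytical (Bateman) solution for a non-recycling catenary chain.

    Compartment 1 starts with amount n0; material is transferred along the
    chain with first-order rate constants lambdas[0], lambdas[1], ...
    Returns an array with the content of each compartment at the times t.
    Assumes all rate constants are distinct (the usual Bateman condition).
    """
    lam = np.asarray(lambdas, dtype=float)
    t = np.atleast_1d(t)
    n = len(lam)
    out = np.zeros((n, len(t)))
    for k in range(n):                      # compartment index (0-based)
        coeff = n0 * np.prod(lam[:k])       # product of rates feeding compartment k along the chain
        s = np.zeros(len(t))
        for i in range(k + 1):
            denom = np.prod([lam[j] - lam[i] for j in range(k + 1) if j != i])
            s += np.exp(-lam[i] * t) / denom
        out[k] = coeff * s
    return out

# Example: 3-compartment chain with illustrative transfer rates (per day)
content = bateman_chain([0.5, 0.1, 0.01], n0=1.0, t=np.array([1.0, 10.0, 100.0]))
```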
2

COUNTING CLOSED GEODESICS IN ORBIT CLOSURES

John Abou-Rached (15305485) 17 April 2023 (has links)
The moduli space of Abelian differentials on Riemann surfaces admits a natural action by $\mathrm{SL}\left(2,\mathbb{R}\right)$. This thesis is concerned with using the classification of invariant measures for this action, due to Eskin and Mirzakhani, to study the growth of closed geodesics in the support of an invariant measure coming from the closure of an orbit for the $\mathrm{SL}\left(2,\mathbb{R}\right)$-action. These orbit closures are always subvarieties of moduli space. For $0 \leq \theta \leq 1$, we obtain an exponential bound on the number of closed geodesics in the orbit closure, of length at most $R$, that have at least a $\theta$-fraction of their length in a region with short saddle connections.
3

THEORY AND APPLICATIONS OF DATA SCIENCE

Sheng Zhang (13900074) 07 October 2022 (has links)
This work is a collection of original research contributing to several topical areas of contemporary data science, covering both theory and applications. The topics include discovering physical laws from data, data-driven epidemiological models, Gaussian random field surrogate models, and image texture classification. In Chapter 2, we introduce a novel method for discovering physical laws from data with uncertainty quantification. In Chapter 3, this method is enhanced to tackle high noise and outliers. In Chapter 4, the method is applied to discover the law of turbine component damage in industry. In Chapter 5, we propose a new framework for building trustworthy data-driven epidemiological models and apply it to the COVID-19 outbreak in New York City. In Chapter 6, we construct the augmented Gaussian random field, a universal framework incorporating data on an observable and its derivatives of any order; both the theoretical and computational frameworks are established. In Chapter 7, we introduce the use of the 2-dimensional signature, an object inspired by rough path theory, as a feature for image texture classification.
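The abstract does not spell out the discovery algorithm; the sketch below shows a generic sparse-regression approach to identifying governing terms from data (sequentially thresholded least squares, in the SINDy spirit), without the uncertainty quantification the thesis emphasises. All names, thresholds and example values are illustrative.

```python
import numpy as np

def sparse_law_discovery(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: fit dx/dt ~ theta @ xi,
    zero out small coefficients, and refit on the surviving terms.

    theta: (n_samples, n_terms) library of candidate functions evaluated on the data.
    dxdt:  (n_samples,) observed time derivative.
    Returns a sparse coefficient vector xi identifying the active terms.
    """
    xi, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)
    return xi

# Example: recover dx/dt = -2x + 0.5x^3 from noisy samples
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
dxdt = -2 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(500)
library = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate terms 1, x, x^2, x^3
print(sparse_law_discovery(library, dxdt))
```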
4

Local exact controllability to the trajectories of the liquid crystal flow and global null controllability of the liquid crystal flow with an external field

Yinzhen Li (17584263) 09 December 2023 (has links)
<p dir="ltr">This dissertation encompasses my research work during my Ph.D. career, focusing on the controllability properties of liquid crystal flow. I have achieved two main results, which are as follows: Firstly, I have established the local exact controllability to the trajectory of the simplified Ericksen-Leslie model and its Ginzburg-Landau approximation. Secondly, I have successfully proven the global null controllability of the simplified Ericksen-Leslie model with Lions boundary condition for u and Neumann boundary condition for d, with the aid of a globally defined magnetic field which is independent of the spatial variable x. </p>
5

Analysis of magnetoencephalographic data as a nonlinear dynamical system

Woon, Wei Lee January 2002 (has links)
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
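A standard first step when treating a single-channel recording as a dynamical system is time-delay embedding. The sketch below is a generic Takens-style embedding, not code from the thesis; the embedding dimension, lag and example signal are arbitrary placeholders.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a single-channel series (Takens-style).

    Returns an array of shape (len(x) - (dim - 1) * tau, dim) whose rows are
    the delay vectors [x(t), x(t - tau), ..., x(t - (dim - 1) * tau)].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this embedding")
    return np.column_stack(
        [x[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n] for k in range(dim)]
    )

# Example: embed a noisy oscillation in 3 dimensions with a lag of 5 samples
t = np.arange(2000)
signal = np.sin(0.05 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
vectors = delay_embed(signal, dim=3, tau=5)
```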
6

Pragmatic algorithms for implementing geostatistics with large datasets

Ingram, Benjamin R. January 2008 (has links)
With the ability to collect and store increasingly large datasets on modern computers comes the need to be able to process the data in a way that can be useful to a Geostatistician or application scientist. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively, for likelihood-based Geostatistics. Various methods have been proposed and are extensively used in an attempt to overcome these complexity issues. This thesis introduces a number of principled techniques for treating large datasets, with an emphasis on three main areas: reduced-complexity covariance matrices, sparsity in the covariance matrix, and parallel algorithms for distributed computation. These techniques are presented individually, but it is also shown how they can be combined to produce techniques for further improving computational efficiency.
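One widely used reduced-complexity covariance technique of the kind discussed above is the subset-of-regressors (low-rank) approximation, which replaces the O(n^3) solve with an O(n m^2) one via m inducing locations. The sketch below is a generic illustration, not the thesis's implementation; the kernel, inducing points and noise level are placeholders.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sor_predict(x, y, x_star, x_m, noise=1e-2):
    """Subset-of-regressors (low-rank) GP predictive mean.

    Uses m inducing locations x_m so the dominant cost is an m x m solve
    instead of an n x n one.
    """
    Kmm = rbf(x_m, x_m)
    Kmn = rbf(x_m, x)
    Ksm = rbf(x_star, x_m)
    A = noise * Kmm + Kmn @ Kmn.T            # m x m system
    return Ksm @ np.linalg.solve(A, Kmn @ y)

# Example: 10,000 observations summarised by 50 inducing points
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 10_000)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
x_m = np.linspace(0, 10, 50)
x_star = np.linspace(0, 10, 200)
prediction = sor_predict(x, y, x_star, x_m)
```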
7

Non-linear hierarchical visualisation

Sun, Yi January 2002 (has links)
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points with high dimensions. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can visualise the distribution of the data using magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive (top-down) fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail, and at each stage of hierarchical LTM construction the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model.
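The E-/M-step alternation mentioned above is easiest to see in a simple mixture model. The sketch below fits a one-dimensional Gaussian mixture by EM purely as an illustration of that alternation; it is far simpler than the latent trait models used in the thesis, and all names and settings are illustrative.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=50, seed=0):
    """One-dimensional Gaussian mixture fitted by EM.

    E-step: compute responsibilities of each component for each point.
    M-step: re-estimate weights, means and variances from the responsibilities.
    """
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                       # initial means drawn from the data
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] proportional to w_j * N(x_n | mu_j, var_j)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates that maximise the expected log-likelihood
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Example: two overlapping clusters
data_rng = np.random.default_rng(1)
data = np.concatenate([data_rng.normal(-2, 1, 300), data_rng.normal(3, 0.5, 300)])
weights, means, variances = em_gmm_1d(data, k=2)
```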
8

Modelling nonlinear stochastic dynamics in financial time series

Lesch, Ragnar H. January 2000 (has links)
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic, and therefore prices follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can attempt to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to dealing with data characterised in part by both paradigms. This thesis investigates these two approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach here includes an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches can be combined to achieve a more realistic model for financial time series.
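As an example of the non-parametric side of this toolbox, the sketch below estimates a return density with a Gaussian kernel and Silverman's rule-of-thumb bandwidth. It is a generic illustration, not the thesis's estimator; the simulated price series is arbitrary.

```python
import numpy as np

def gaussian_kde_pdf(returns, grid, bandwidth=None):
    """Non-parametric (Gaussian kernel) estimate of a return density.

    If no bandwidth is given, Silverman's rule of thumb is used.
    """
    returns = np.asarray(returns, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * returns.std(ddof=1) * len(returns) ** (-1 / 5)
    z = (grid[:, None] - returns[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(returns) * bandwidth * np.sqrt(2 * np.pi))

# Example: log-returns of a simulated price series
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))
log_returns = np.diff(np.log(prices))
grid = np.linspace(log_returns.min(), log_returns.max(), 200)
density = gaussian_kde_pdf(log_returns, grid)
```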
9

Investigating viscous fluid flow in an internal mixer using computational fluid dynamics

Harries, Alun M. January 2000 (has links)
This thesis presents an effective methodology for the generation of a simulation which can be used to increase understanding of viscous fluid processing equipment and aid in its development, design and optimisation. The Hampden RAPRA Torque Rheometer internal batch twin-rotor mixer has been simulated with a view to establishing model accuracies, limitations, practicalities and uses. As this research progressed, via several 'snap-shot' analyses of rotor configurations using the commercial code Polyflow, it became evident that the model was of some worth and that its predictions were in good agreement with the validation experiments; however, several major restrictions were identified. These included poor element form, high man-hour requirements for the construction of each geometry, and the absence of the transient term in these models. All, or at least some, of these limitations apply to the numerous attempts to model internal mixers by other researchers, and it was clear that there was no generally accepted methodology providing a practical three-dimensional model which had been adequately validated. This research, unlike others, presents a full, complex three-dimensional, transient, non-isothermal, generalised non-Newtonian simulation with wall slip which overcomes these limitations using unmatched gridding and sliding mesh technology adapted from CFX codes. This method yields good element form and, since only one geometry has to be constructed to represent the entire rotor cycle, is extremely beneficial for detailed flow field analysis when used in conjunction with user-defined programmes and automatic geometry parameterisation (AGP), and improves accuracy for investigating equipment design and operating conditions. Model validation has been identified as an area which has been neglected by other researchers in this field, especially for time-dependent geometries, and has been rigorously pursued here in terms of qualitative and quantitative velocity vector analysis of the isothermal, full-fill mixing of generalised non-Newtonian fluids, as well as torque comparison, with a relatively high degree of success. This indicates that CFD models of this type can be accurate, and perhaps have not been validated to this extent previously because of the inherent difficulties arising from most real processes.
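The abstract does not name the constitutive law used for the generalised non-Newtonian fluid; a common choice for shear-thinning polymer melts is the Carreau model, sketched below with placeholder parameters purely as an illustration.

```python
import numpy as np

def carreau_viscosity(shear_rate, eta0=1.0e4, eta_inf=1.0, lam=1.0, n=0.3):
    """Carreau model, a common generalised non-Newtonian viscosity law:
    eta(gamma_dot) = eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot)^2)^((n - 1) / 2).
    Shear-thinning for n < 1, with a Newtonian plateau at low shear rates.
    """
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Viscosity over a typical processing range of shear rates (1/s)
gamma_dot = np.logspace(-2, 3, 6)
print(carreau_viscosity(gamma_dot))
```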
10

Digital image watermarking

Bounkong, Stephane January 2004 (has links)
In recent years, interest in digital watermarking has grown significantly. Indeed, the use of digital watermarking techniques is seen as a promising means to protect intellectual property rights of digital data and to ensure its authentication. Thus, a significant research effort has been devoted to the study of practical watermarking systems, in particular for digital images. In this thesis, a practical and principled approach to the problem is adopted. Several aspects of practical watermarking schemes are investigated. First, a power-constraint formulation of the problem is presented. Then, a new analysis of quantisation effects on the information rate of digital watermarking schemes is proposed and compared to other approaches suggested in the literature. Subsequently, a new information embedding technique, based on quantisation, is put forward and its performance evaluated. Finally, the influence of image data representation on the performance of practical schemes is studied, along with a new representation based on independent component analysis.
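The abstract does not detail the embedding rule; the sketch below shows classic quantisation index modulation (QIM), a standard quantisation-based embedding scheme, as a generic illustration rather than the thesis's technique. The quantisation step and coefficients are placeholders.

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Quantisation index modulation: embed one bit per coefficient by
    quantising onto one of two interleaved lattices (offset by step/2 for bit 1)."""
    offsets = np.where(np.asarray(bits, dtype=bool), step / 2.0, 0.0)
    return np.round((coeffs - offsets) / step) * step + offsets

def qim_extract(coeffs, step=8.0):
    """Recover bits by checking which of the two lattices each coefficient is nearest to."""
    d0 = np.abs(coeffs - np.round(coeffs / step) * step)
    d1 = np.abs(coeffs - (np.round((coeffs - step / 2.0) / step) * step + step / 2.0))
    return (d1 < d0).astype(int)

# Embed 8 bits into 8 image coefficients, then read them back
rng = np.random.default_rng(4)
coeffs = 100 * rng.standard_normal(8)
bits = rng.integers(0, 2, 8)
assert np.array_equal(qim_extract(qim_embed(coeffs, bits)), bits)
```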
