
Analysis of magnetoencephalographic data as a nonlinear dynamical system

Woon, Wei Lee January 2002
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta and other bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio that is obtained. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side effect is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp the signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and these signals are often cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with procuring and maintaining these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
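The dynamical-systems perspective described above typically begins by reconstructing a state space from a single observed channel via time-delay embedding. The sketch below is a generic illustration of that step on synthetic data; the signal, embedding dimension and lag are assumptions chosen for illustration, not code or parameters from the thesis.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct state vectors [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]
    from a single-channel time series (Takens-style delay embedding)."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic stand-in for one unaveraged MEG channel: a 10 Hz rhythm plus noise
t = np.linspace(0.0, 10.0, 2000)
channel = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
states = delay_embed(channel, dim=5, tau=7)
print(states.shape)  # (1972, 5) reconstructed state vectors
```

Nonlinear measures (correlation dimension, Lyapunov exponents) or nonlinear predictors are then computed on the embedded states rather than on raw spectra.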

Pragmatic algorithms for implementing geostatistics with large datasets

Ingram, Benjamin R. January 2008
With the ability to collect and store increasingly large datasets on modern computers comes the need to process the data in a way that is useful to a geostatistician or application scientist. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively, for likelihood-based geostatistics. Various methods have been proposed, and are extensively used, in an attempt to overcome these complexity issues. This thesis introduces a number of principled techniques for treating large datasets, with an emphasis on three main areas: reduced-complexity covariance matrices, sparsity in the covariance matrix, and parallel algorithms for distributed computation. These techniques are presented individually, but it is also shown how they can be combined to further improve computational efficiency.
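As a rough illustration of the first theme, reduced-complexity covariance matrices, the sketch below builds a rank-m Nyström-style approximation to a squared-exponential covariance. The locations, kernel and hyperparameters are made-up assumptions for illustration, not the specific algorithms developed in the thesis.

```python
import numpy as np

def sq_exp_cov(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of locations."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def nystrom_approx(X, m, lengthscale=1.0, variance=1.0, jitter=1e-8):
    """Rank-m Nystrom approximation K ~ K_nm K_mm^{-1} K_mn.

    Storage drops from O(n^2) to O(nm) and the dominant cost from
    O(n^3) to O(n m^2), the kind of saving large-dataset methods target."""
    idx = np.random.choice(len(X), size=m, replace=False)
    Xm = X[idx]
    K_nm = sq_exp_cov(X, Xm, lengthscale, variance)
    K_mm = sq_exp_cov(Xm, Xm, lengthscale, variance) + jitter * np.eye(m)
    return K_nm, np.linalg.cholesky(K_mm)

# 10,000 random 2-D locations: the full 10k x 10k covariance is never formed
X = np.random.rand(10_000, 2)
K_nm, L_mm = nystrom_approx(X, m=100)
print(K_nm.shape, L_mm.shape)  # (10000, 100) (100, 100)
```

Downstream solves with the full covariance are then replaced by operations involving only K_nm and the small m x m factor, which is where the quadratic and cubic costs quoted above are avoided.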

Non-linear hierarchical visualisation

Sun, Yi January 2002
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analysing High Throughput Screening datasets, which may comprise thousands of high-dimensional data points. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can examine the distribution of the data through magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive, top-down fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlap between data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model.
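To make the E-step/M-step alternation concrete, here is a toy EM loop for a one-dimensional Gaussian mixture. It assumes standard EM updates on synthetic data and is only a stand-in for the far richer latent trait model training described above.

```python
import numpy as np

def em_gaussian_mixture(x, k, n_iter=50):
    """Minimal EM for a 1-D Gaussian mixture, illustrating the E-/M-step alternation."""
    n = len(x)
    means = np.random.choice(x, k)
    variances = np.full(k, np.var(x))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        log_p = (-0.5 * (x[:, None] - means) ** 2 / variances
                 - 0.5 * np.log(2 * np.pi * variances) + np.log(weights))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters to maximise the expected log-likelihood
        nk = resp.sum(axis=0)
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
        weights = nk / n
    return weights, means, variances

x = np.concatenate([np.random.randn(500) - 3, np.random.randn(500) + 3])
print(em_gaussian_mixture(x, k=2))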

Modelling nonlinear stochastic dynamics in financial time series

Lesch, Ragnar H. January 2000
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic, and therefore prices follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can try to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to data characterised in part by both paradigms. This thesis investigates both approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of the embedded financial data. The stochastic approach here includes estimating the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches can be combined to achieve a more realistic model for financial time series.
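One common way to characterise the dimensionality of embedded data is the Grassberger-Procaccia correlation sum. The sketch below applies it to a simulated geometric random walk purely for illustration; the data, embedding parameters and radii are assumptions, not results or code from the thesis.

```python
import numpy as np

def correlation_sum(states, r):
    """Fraction of state-vector pairs closer than r; the slope of log C(r)
    against log r over a scaling region estimates the correlation dimension."""
    n = len(states)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    close = np.sum(dists[np.triu_indices(n, k=1)] < r)
    return 2.0 * close / (n * (n - 1))

# Simulated price series (geometric random walk) and its log-returns
prices = 100.0 * np.exp(np.cumsum(0.01 * np.random.randn(1200)))
returns = np.diff(np.log(prices))

# Delay-embed the returns before computing the correlation sum
dim, tau = 3, 1
n = len(returns) - (dim - 1) * tau
states = np.column_stack([returns[i * tau : i * tau + n] for i in range(dim)])

for r in np.logspace(-2.5, -1.0, 6):
    print(f"r={r:.4f}  C(r)={correlation_sum(states, r):.4f}")
```

For a genuinely stochastic random walk the estimated dimension keeps growing with the embedding dimension, which is one way the deterministic and stochastic viewpoints can be discriminated in practice.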

Investigating viscous fluid flow in an internal mixer using computational fluid dynamics

Harries, Alun M. January 2000
This thesis presents an effective methodology for generating a simulation which can be used to increase understanding of viscous fluid processing equipment and aid in its development, design and optimisation. The Hampden RAPRA Torque Rheometer internal batch twin-rotor mixer has been simulated with a view to establishing model accuracies, limitations, practicalities and uses. As this research progressed, via several 'snap-shot' analyses of rotor configurations using the commercial code Polyflow, it became evident that the model was of some worth and that its predictions were in good agreement with the validation experiments; however, several major restrictions were identified. These included poor element form, high man-hour requirements for the construction of each geometry, and the absence of the transient term in these models. All, or at least some, of these limitations apply to the numerous attempts by other researchers to model internal mixers, and it was clear that there was no generally accepted methodology providing a practical three-dimensional model that had been adequately validated. This research, unlike others, presents a full, complex three-dimensional, transient, non-isothermal, generalised non-Newtonian simulation with wall slip which overcomes these limitations using unmatched gridding and sliding-mesh technology adapted from CFX codes. This method yields good element form and, since only one geometry has to be constructed to represent the entire rotor cycle, is extremely beneficial for detailed flow-field analysis when used in conjunction with user-defined programmes and automatic geometry parameterisation (AGP), and improves accuracy for investigating equipment design and operating conditions. Model validation has been identified as an area neglected by other researchers in this field, especially for time-dependent geometries, and has been rigorously pursued here in terms of qualitative and quantitative velocity-vector analysis of the isothermal, full-fill mixing of generalised non-Newtonian fluids, as well as torque comparison, with a relatively high degree of success. This indicates that CFD models of this type can be accurate, and perhaps have not been validated to this extent previously because of the inherent difficulties arising from most real processes.
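For readers unfamiliar with the term, a generalised non-Newtonian simulation replaces a constant viscosity with a shear-rate-dependent one. The sketch below evaluates a Carreau-type law with placeholder parameters chosen purely for illustration, not taken from the thesis or from any Polyflow/CFX set-up.

```python
import numpy as np

def carreau_viscosity(shear_rate, eta0=1.0e4, eta_inf=0.1, lam=1.0, n=0.4):
    """Carreau model for a shear-thinning fluid:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam * shear_rate)**2) ** ((n - 1) / 2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

for gdot in np.logspace(-2, 3, 6):  # shear rates in 1/s
    print(f"shear rate = {gdot:9.2f} 1/s   viscosity = {carreau_viscosity(gdot):10.2f} Pa.s")
```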

Digital image watermarking

Bounkong, Stephane January 2004
In recent years, interest in digital watermarking has grown significantly. Indeed, the use of digital watermarking techniques is seen as a promising means of protecting the intellectual property rights of digital data and of ensuring the authentication of digital data. Thus, a significant research effort has been devoted to the study of practical watermarking systems, in particular for digital images. In this thesis, a practical and principled approach to the problem is adopted. Several aspects of practical watermarking schemes are investigated. First, a power-constraint formulation of the problem is presented. Then, a new analysis of quantisation effects on the information rate of digital watermarking schemes is proposed and compared with other approaches suggested in the literature. Subsequently, a new information-embedding technique, based on quantisation, is put forward and its performance evaluated. Finally, the influence of image data representation on the performance of practical schemes is studied, along with a new representation based on independent component analysis.
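The quantisation-based embedding idea can be illustrated with a simple quantisation-index-modulation (QIM) scheme: each carrier coefficient is quantised onto one of two interleaved lattices according to the message bit. This is a generic textbook-style sketch with an arbitrary step size, not the specific technique proposed in the thesis.

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Embed one bit per coefficient: quantise onto a lattice offset by 0 (bit 0)
    or step/2 (bit 1)."""
    offsets = np.where(np.asarray(bits) == 0, 0.0, step / 2.0)
    return np.round((coeffs - offsets) / step) * step + offsets

def qim_decode(coeffs, step=8.0):
    """Recover bits by testing which offset lattice each coefficient is nearest to."""
    d0 = np.abs(coeffs - np.round(coeffs / step) * step)
    d1 = np.abs(coeffs - (np.round((coeffs - step / 2) / step) * step + step / 2))
    return (d1 < d0).astype(int)

coeffs = 20.0 * np.random.randn(8)                    # e.g. transform-domain image coefficients
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])             # watermark message
marked = qim_embed(coeffs, bits)
received = marked + np.random.uniform(-1.5, 1.5, 8)   # mild distortion, below step/4
print(qim_decode(received))                           # recovers the embedded bits
```

The step size trades embedding distortion against robustness, which connects naturally to the power-constraint formulation mentioned above.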

Determinants of Active Pursuit of Kidney Donation: Applying the Theory of Motivated Information Management

West, Stacy M 01 January 2016 (has links)
End stage renal disease (ESRD) is a growing epidemic affecting the United States. While the optimal treatment for ESRD is renal replacement, barriers exist that make this treatment difficult, and sometimes impossible, for patients to pursue. One potential solution to these barriers is to encourage patients to actively seek living donors, an inherently communicative and social process. The Theory of Motivated Information Management (TMIM) offers a framework for understanding factors that contribute to patients’ conversations about transplantation with their social networks. It is also possible that Patient Empowerment can add to this model and inform future patient education. Specific variables related to the TMIM and Patient Empowerment are analyzed in bivariate and logistic regression analyses. Variables that were significant in bivariate analysis did not remain significant when included in a full logistic regression analysis. Study results and outcomes suggest that further research is warranted.

Collecting and interpreting human skulls and hair in late Nineteenth Century London : passing fables & comparative readings at The Wildgoose Memorial Library : an artist's response to the DCMS Guidance for the Care of Human Remains in Museums (2005)

Wildgoose, Jane January 2015
This practice-based doctoral research project is an artist’s response to the ‘unique status’ ascribed to human remains in the DCMS Guidance for the Care of Human Remains in Museums (2005): as objects, in scientific, medical/anthropological contexts, or subjects, which may be understood in associative, symbolic and/or emotional ways. It is concerned with the circumstances in which human remains were collected and interpreted in the past, and with the legacies of historical practice regarding their presence in museum collections today. Overall, it aims to contribute to public engagement concerning these issues. Taking the form of a comparative study, the project focuses on the late nineteenth century, when human skulls were collected in great numbers for comparative anatomical and anthropological research, while in wider society the fashion for incorporating human hair into mourning artefacts became ubiquitous following the death of Prince Albert in 1861. William Henry Flower’s craniological work at the Hunterian Museum of the Royal College of Surgeons of England, where he amassed a vast collection of human skulls that he interpreted according to theories of racial “type” (in which hair was identified as an important distinguishing characteristic), is investigated and its legacy reviewed. His scientific objectification of human remains is presented for comparison, in parallel, with the emotional and associative significance popularly attributed to mourning hairwork, evidenced in accompanying documentation, contemporary diaries, literature, and hairworkers’ manuals. Combining inter-related historical, archival- and object-based research with subjective and intuitive elements in my practice, a synthesis of the artistic and academic is developed in the production of a new “archive” of The Wildgoose Memorial Library - my collection of found and made objects, photographs, documents and books that takes a central place in my practice. Victorian hairworking skills are researched, and a new piece of commemorative hairwork is devised and made as the focus for a site-specific presentation of this archive at the Crypt Gallery, St Pancras, in which a new approach to public engagement is implemented and tested, concerning the legacy and special status of human remains in museum collections today.

3D interactive technology and the museum visitor experience

Smith, M. January 2015
There is a growing interest in developing systems for displaying museum artefacts as well as historic buildings and materials. This work connects with that interest by creating a 3D interactive display for Fishbourne Roman Palace Museum, West Sussex, England. The research aimed to create a reconstruction of the Palace as it would have been at its height, a reconstruction that is interactive in the sense that museum visitors can walk through the buildings and local grounds and experience the site in a way not possible through traditional museum displays. The inclusion of the interactive element prompted the incorporation of game engines as a means of visualising and navigating around the reconstructed 3D model of the Palace. There are numerous game engines available, and the research evaluated a selection with respect to their functionality, cost, and ease of use. It also applied a technology readiness method to assess potential users’ responses to the incorporation of different degrees of interactivity. Research was undertaken regarding the appearance of the Palace and, based on the available archaeology and relevant artistic interpretations, a model was created using Autodesk Maya software. This model was exported into each of the candidate game engines, and a comparison was made based on each engine’s audio, visual, and functional fidelity, as well as its composability and accessibility. The most appropriate engine was chosen on the basis of these results. With reference to the assessment criteria, the hardware and software are being prepared for installation at the Fishbourne Roman Palace Museum. The Technology Readiness Index was applied to determine the effectiveness of such a display compared to a non-interactive representation, a study concluding that a highly interactive display may not be the most sensible solution for the majority of visitors.

Exploiting uncertainty in nonlinear stochastic control problem

Herzallah, Randa January 2003
This work introduces a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems, which can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of neural networks can be obtained using the statistical properties of the networks. More generally, multicomponent distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. Convergence of the output error for the proposed control method is verified using a Lyapunov function. Several simulation examples are provided to demonstrate the efficiency of the developed control method. The manner in which such a method is extended to nonlinear multi-variable systems with different delays between the input-output pairs is also considered and demonstrated through simulation examples.
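As a loose, simplified illustration of how predictive uncertainty can localise the search for a control action, the sketch below samples candidate controls from an assumed Gaussian inverse-model distribution and scores them with a toy forward model; the thesis itself uses mixture density networks and importance sampling in a much richer setting.

```python
import numpy as np

def choose_control(u_mean, u_std, forward_model, target, n_samples=200):
    """Sample candidate controls from the inverse model's predictive distribution
    and keep the one whose forward-model prediction lies closest to the target."""
    candidates = np.random.normal(u_mean, u_std, size=n_samples)
    predictions = np.array([forward_model(u) for u in candidates])
    return candidates[np.argmin((predictions - target) ** 2)]

# Toy plant y = u**3; the inverse model's mean and uncertainty are assumed values
forward = lambda u: u ** 3
u_star = choose_control(u_mean=1.8, u_std=0.4, forward_model=forward, target=8.0)
print(u_star, forward(u_star))  # close to u = 2, y = 8
```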
