41

Parallel MCMC methods and their applications in inverse problems

Russell, Paul January 2018 (has links)
In this thesis we introduce a framework for parallel MCMC methods which we call parallel adaptive importance sampling (PAIS). At each iteration we have an ensemble of particles, from which PAIS builds a kernel density estimate (KDE). We propose a new ensemble, using this KDE, that is weighted according to standard importance sampling rules. A state-of-the-art resampling method from the optimal transportation literature, or alternatively our own novel resampling algorithm, can be used to produce an equally weighted ensemble from this weighted ensemble. This equally weighted ensemble is approximately distributed according to the target distribution and is used to progress the algorithm. The PAIS algorithm outputs a weighted sample. We introduce an adaptive scheme for PAIS which automatically tunes the scaling parameters required for efficient sampling. This adaptive tuning converges rapidly for the target distributions we have experimented with and significantly reduces the burn-in period of the algorithm. PAIS has been designed to work well on computers with parallel processing units available, and we have demonstrated that doubling the number of available processing units more than halves the number of iterations required to reach the same accuracy. The numerical examples have been implemented on a shared memory system. PAIS is incredibly flexible in terms of the proposal distributions and resampling methods we can use. Throughout the thesis we introduce a number of these proposal schemes, and highlight when they may be of use. Of particular interest is the transport map based proposal scheme introduced in Chapter 7 which, while more expensive than the other schemes, allows us to sample efficiently from a wide range of complex target distributions.
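The iteration described above (KDE built from the current ensemble, importance-weighted proposal, resampling back to equal weights) can be outlined roughly as follows. This is only an illustrative sketch, assuming a SciPy Gaussian KDE proposal and plain multinomial resampling in place of the optimal-transport or novel resampling schemes the thesis describes; log_target is a user-supplied log-density.

import numpy as np
from scipy.stats import gaussian_kde

def pais_step(ensemble, log_target, rng):
    # One PAIS-style iteration on an (n_particles, dim) ensemble.
    kde = gaussian_kde(ensemble.T)                    # proposal built from current particles
    proposal = kde.resample(len(ensemble)).T          # new ensemble drawn from the KDE
    log_w = np.array([log_target(x) for x in proposal]) - kde.logpdf(proposal.T)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                      # standard importance-sampling weights
    idx = rng.choice(len(proposal), size=len(proposal), p=w)
    return proposal[idx], proposal, w                 # equally weighted ensemble, plus the weighted sample

The per-particle evaluations of log_target are the naturally parallel part of such a scheme, which is where the scaling with the number of processing units would come from.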
42

A New Method Of Resampling Testing Nonparametric Hypotheses: Balanced Randomization Tests

January 2014 (has links)
Background: Resampling methods such as the Monte Carlo (MC) and bootstrap approach (BA) are very flexible tools for statistical inference. They are generally used in experiments with small sample sizes, or where the parametric test assumptions are not met. They are also used in situations where expressions for properties of complex estimators are statistically intractable. However, the MC and BA methods require relatively large random samples to estimate the parameters of the full permutation (FP), or exact, distribution. Objective: The objective of this research study was to develop an efficient statistical computational resampling method that compares two population parameters, using a balanced and controlled sampling design. The application of the new method, the balanced randomization (BR) method, is discussed using microarray data, where sample sizes are generally small. Methods: Multiple datasets were simulated from real data to compare the accuracy and efficiency of the methods (BR, MC, and BA). Datasets, probability distributions, parameters, and sample sizes were varied in the simulation. The correlation between the exact p-value and the p-values generated by simulation provides a measure of accuracy/consistency for comparing methods. Sensitivity, specificity, power, and false negative and false positive rates, assessed with graphical and multivariate analyses, were used to compare methods. Results and Discussion: The correlations between the exact p-value and those estimated from simulation are higher for BR and MC (increasing somewhat with increasing sample size), much lower for BA, with differences most pronounced for skewed distributions (lognormal, exponential). Furthermore, the relative proportion of 95%/99% CIs containing the true p-value for BR vs. MC was 3%/1.3% (p<0.0001) and for BR vs. BA was 20%/15% (p<0.0001). The sensitivity, specificity and power of the BR method were shown to have a slight advantage compared to those of MC and BA in most situations. As an example, the BR method was applied to a microarray study to identify significantly differentially expressed genes.
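The abstract does not spell out how the balanced, controlled sampling design is constructed, so the sketch below shows only the standard Monte Carlo permutation test that both BR and MC approximate: the exact (full-permutation) p-value is estimated by randomly relabelling the pooled observations. All names and data here are illustrative.

import numpy as np

def mc_permutation_pvalue(x, y, n_perm=10_000, rng=None):
    # Monte Carlo permutation test for a difference in means between two groups.
    rng = rng or np.random.default_rng()
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)               # random relabelling of the pooled data
        if abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)                # add-one estimate avoids a zero p-value

# Example on simulated expression values for one gene under two conditions
rng = np.random.default_rng(0)
p = mc_permutation_pvalue(rng.normal(0, 1, 6), rng.normal(1, 1, 6), rng=rng)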
43

A resampling theory for non-bandlimited signals and its applications : a thesis presented for the partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Wellington, New Zealand

Huang, Beilei January 2008 (has links)
Currently, digital signal processing systems typically assume that the signals are bandlimited. This is due to our knowledge being based on the uniform sampling theorem for bandlimited signals, which was established over 50 years ago by the works of Whittaker, Kotel'nikov and Shannon. However, in practice digital signals are mostly of finite length, and signals of this kind are not strictly bandlimited. Furthermore, advances in electronics have led to the use of very wide bandwidth signals and systems, such as Ultra-Wide Band (UWB) communication systems with signal bandwidths of several gigahertz. Such signals can effectively be viewed as having infinite bandwidth. Thus there is a need to extend existing theory and techniques for signals of finite bandwidth to non-bandlimited signals. Two recent approaches to a more general sampling theory for non-bandlimited signals have been published. One is for signals with a finite rate of innovation. The other introduced the concept of consistent sampling. It views sampling and reconstruction as projections of signals onto subspaces spanned by the sampling (acquisition) and reconstruction (synthesis) functions. Consistent sampling is achieved if the same discrete signal is obtained when the reconstructed continuous signal is sampled again. However, it has been shown that when this generalized theory is applied to the de-interlacing of video signals, incorrect results are obtained. This is because de-interlacing is essentially a resampling problem rather than a sampling problem: both the input and the output are discrete. While the theory of resampling for bandlimited signals is well established, the problem of resampling without the bandlimited constraint is largely unexplored. The aim of this thesis is to develop a resampling theory for non-bandlimited discrete signals and explore some of its potential applications. The first major contribution is the theory and techniques for designing an optimal resampling system for signals in a general Hilbert space when noise is not present. The system is optimal in the sense that the input of the system can always be obtained from the output. The theory is based on the concept of consistent resampling, which means that the same continuous signal will be obtained when either the original or the resampled discrete signal is presented to the reconstruction filter. While comparing the input and output of a sampling/reconstruction system is relatively simple, since both are continuous signals, comparing the discrete input and output of a resampling system is not. The second major contribution of this thesis is the proposal of a metric that allows us to evaluate the performance of a resampling system. The performance is analyzed in the Fourier domain as well. This performance metric also provides a way by which different resampling algorithms can be compared effectively. It therefore facilitates the process of choosing proper resampling schemes for a particular purpose. Unfortunately, consistent resampling cannot always be achieved if noise is present in the signal or the system. Based on the performance metric proposed, the third major contribution of this thesis is the development of procedures for designing resampling systems in the presence of noise which are optimal in the mean squared error (MSE) sense. Both discrete and continuous noise are considered. The problem is formulated as a semi-definite program which can be solved efficiently by existing techniques.
The usefulness and correctness of the consistent resampling theory are demonstrated by its application to the video de-interlacing problem, image processing, the demodulation of ultra-wideband communication signals, and mobile channel detection. The results show that the proposed resampling system has many advantages over existing approaches, including lower computational and time complexity, more accurate prediction of system performance, as well as robustness against noise.
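The consistency idea the thesis builds on can be illustrated with a small finite-dimensional toy example of consistent sampling as described above: sampling and reconstruction are projections onto subspaces, and re-sampling the reconstruction returns the original discrete samples even though the underlying signal is not recovered. This is only a generic sketch of consistent sampling, not the resampling system developed in the thesis.

import numpy as np

# Toy illustration: a "continuous" signal is a length-N vector; sampling and
# reconstruction act through subspaces spanned by the columns of S and R.
rng = np.random.default_rng(0)
N, M = 64, 8
S = rng.standard_normal((N, M))          # sampling (acquisition) functions
R = rng.standard_normal((N, M))          # reconstruction (synthesis) functions
f = rng.standard_normal(N)               # arbitrary, non-bandlimited signal

c = S.T @ f                              # measured samples
Q = np.linalg.inv(S.T @ R)               # correction filter (assumed invertible)
f_hat = R @ (Q @ c)                      # oblique-projection reconstruction

# Consistency: sampling the reconstruction reproduces the samples exactly,
# even though f_hat differs from f outside the reconstruction subspace.
assert np.allclose(S.T @ f_hat, c)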
44

Resampling in particle filters

Hol, Jeroen D. January 2004 (has links)
In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and computational complexity.
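For reference, a minimal sketch of systematic resampling, the scheme the report favours, in its usual textbook form: a single uniform offset spread over an evenly spaced comb of positions. The report itself compares this against three other algorithms.

import numpy as np

def systematic_resample(weights, rng=None):
    # Systematic resampling: one uniform draw, a stratified comb of positions.
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n    # evenly spaced, random start
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                              # guard against rounding error
    return np.searchsorted(cumulative, positions)     # indices of the selected particles

# Example: resample 5 particles with unequal weights
w = np.array([0.1, 0.1, 0.4, 0.3, 0.1])
idx = systematic_resample(w, np.random.default_rng(1))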
45

Monte Carlo based Threat Assessment: An in depth Analysis

Danielsson, Simon January 2007 (has links)
This thesis presents improvements and extensions of a previously presented threat assessment algorithm. The algorithm uses Monte Carlo simulation to find threats in a road scene. It is shown that, by using a wider sample distribution and applying only the most likely samples from the Monte Carlo simulation to the threat assessment, improved results are obtained. By using this method, more realistic paths are chosen by the simulated vehicles and more complex traffic situations are handled adequately.

An improvement of the dynamic model is also suggested, which improves the realism of the Monte Carlo simulations. Using the new dynamic model, fewer false positives and more valid threats are detected.

A systematic method to choose parameters in a stochastic space, using optimisation, is suggested. More realistic trajectories can be chosen by applying this method to the parameters that represent human behaviour in the threat assessment algorithm.

A new definition of obstacles in a road scene is suggested, dividing them into two groups: Hard and Soft obstacles. A change to the resampling step in the Monte Carlo simulation, using the soft and hard obstacles, is also suggested.
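The abstract does not give the algorithm itself, but the "most likely samples" idea can be sketched generically: draw candidate trajectories from a deliberately wide proposal, score them under a driver-behaviour model, and evaluate the threat only on the top-scoring fraction. The function names (sample_traj, log_lik, is_collision) and the keep fraction below are hypothetical placeholders, not the thesis's implementation.

import numpy as np

def threat_probability(sample_traj, log_lik, is_collision, n=1000, keep_frac=0.2):
    # Draw n candidate trajectories, keep the most likely fraction, assess threats on those.
    trajs = [sample_traj() for _ in range(n)]
    scores = np.array([log_lik(t) for t in trajs])
    keep = np.argsort(scores)[-int(keep_frac * n):]   # indices of the most likely candidates
    return np.mean([is_collision(trajs[i]) for i in keep])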
46

Development, Application and Evaluation of Statistical Tools in Pharmacometric Data Analysis

Lindbom, Lars January 2006 (has links)
Pharmacometrics uses models based on pharmacology, physiology and disease for quantitative analysis of interactions between drugs and patients. The availability of software implementing modern statistical methods is important for efficient model building and evaluation throughout pharmacometric data analyses.

The aim of this thesis was to facilitate the practical use of available and new statistical methods in the area of pharmacometric data analysis. This involved the development of suitable software tools that allow for efficient use of these methods, characterisation of their basic properties, and demonstration of their usefulness when applied to real world data. The thesis describes the implementation of a set of statistical methods (the bootstrap, jackknife, case-deletion diagnostics, log-likelihood profiling and stepwise covariate model building), made available as tools through the software Perl-speaks-NONMEM (PsN). The appropriateness of the methods and the consistency of the software tools were evaluated using a large selection of clinical and nonclinical data. Criteria based on clinical relevance were found to be useful components in automated stepwise covariate model building. Their ability to restrict the number of included parameter-covariate relationships while maintaining the predictive performance of the model was demonstrated using the antiarrhythmic drug dofetilide. Log-likelihood profiling was shown to be equivalent to the bootstrap for calculating confidence intervals for fixed-effects parameters if an appropriate estimation method is used. The condition number of the covariance matrix for the parameter estimates was shown to be a good indicator of how well resampling methods behave when applied to pharmacometric data analyses using NONMEM. The software developed in this thesis equips modellers with an enhanced set of tools for efficient pharmacometric data analysis.
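Of the tools listed, the bootstrap is the easiest to outline in isolation. The sketch below is a generic nonparametric case-resampling bootstrap for a confidence interval; PsN's bootstrap tool instead resamples individuals and re-runs the NONMEM estimation for each replicate, so this is only an illustration of the principle, not the PsN implementation.

import numpy as np

def bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, rng=None):
    # Percentile bootstrap confidence interval for a statistic of a 1-D sample.
    rng = rng or np.random.default_rng()
    n = len(data)
    estimates = np.array([
        estimator(data[rng.integers(0, n, size=n)])   # resample cases with replacement
        for _ in range(n_boot)
    ])
    return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Example: 95% CI for the mean of a small, skewed sample
ci = bootstrap_ci(np.random.default_rng(2).exponential(size=30), np.mean)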
48

Organization of information pathways in complex networks

Mirshahvalad, Atieh January 2013 (has links)
As human beings, we are continuously struggling to comprehend the mechanisms of different natural systems. Many times, we face a complex system where the emergent properties of the system at a global level cannot be explained by a simple aggregation of the system's components at the micro-level. To better understand the macroscopic system effects, we try to model microscopic events and their interactions. In order to do so, we rely on specialized tools to connect local mechanisms with global phenomena. One such tool is network theory. Networks provide a powerful way of modeling and analyzing complex systems based on interacting elements. The interaction pattern links the elements of the system together and provides a structure that controls how information permeates throughout the system. For example, the passing of information about job opportunities in a society depends on how social ties are organized. The interaction pattern, therefore, often is essential for reconstructing and understanding the global-scale properties of the system. In this thesis, I describe tools and models of network theory that we use and develop to analyze the organization of social or transportation systems. More specifically, we explore complex networks by asking two general questions: First, which mechanistic theoretical models can better explain network formation or spreading processes on networks? And second, what are the significant functional units of real networks? For modeling, for example, we introduce a simple agent-based model that considers interacting agents in dynamic networks that in the quest for information generate groups. With the model, we found that the network and the agents' perception are interchangeable; the global network structure and the local information pathways are so entangled that one can be recovered from the other. For investigating significant functional units of a system, we detect, model, and analyze significant communities of the network. Previously introduced methods of significance analysis suffer from oversimplified sampling schemes. We have remedied their shortcomings by proposing two different approaches: first by introducing link prediction and second by using more data when they are available. With link prediction, we can detect statistically significant communities in large sparse networks. We test this method on real networks, the sparse network of the European Court of Justice case law, for example, to detect significant and insignificant areas of law. In the presence of large data, on the other hand, we can investigate how the underlying assumptions of each method affect the results of the significance analysis. We used this approach to investigate different methods for detecting significant communities of time-evolving networks. We found that, when we highlight and summarize important structural changes in a network, the methods that maintain more dependencies in significance analysis can predict structural changes earlier. In summary, we have tried to model the systems with as simple rules as possible to better understand the global properties of the system. We always found that maintaining information about the network structure is essential for explaining important phenomena on the global scale. We conclude that the interaction pattern between interconnected units, the network, is crucial for understanding the global behavior of complex systems because it keeps the system integrated. And remember, everything is connected, albeit not always directly.
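The resampling-based significance analysis mentioned above can be illustrated very loosely: perturb the observed network, re-run community detection, and regard a community as significant if it largely persists across the perturbed networks. The sketch below bootstraps edges for the perturbation, whereas the thesis instead enriches the sample with predicted links or additional data; detect is an assumed, user-supplied community detector returning sets of nodes.

import numpy as np

def community_persistence(edges, community, detect, n_boot=100, rng=None):
    # Persistence score for one community across bootstrap-perturbed networks.
    rng = rng or np.random.default_rng()
    edges = np.asarray(edges)                          # shape (n_edges, 2)
    scores = []
    for _ in range(n_boot):
        resampled = edges[rng.integers(0, len(edges), size=len(edges))]
        found = detect(resampled)                      # communities of the perturbed network
        best = max(len(community & c) / len(community | c) for c in found)
        scores.append(best)                            # best Jaccard match in this replicate
    return np.mean(scores)                             # close to 1 means the community is robust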
50

Nonparametric Methods for Point Processes and Geostatistical Data

Kolodziej, Elizabeth Young August 2010 (has links)
In this dissertation, we explore the properties of correlation structure for spatio-temporal point processes and a quantitative spatial process. Spatio-temporal point processes are often assumed to be separable; we propose a formal approach for testing whether a particular data set is indeed separable. Because of the resampling methodology, the approach requires minimal conditions on the underlying spatio-temporal process to perform the hypothesis test, and thus is appropriate for a wide class of models. Africanized Honey Bees (AHBs, Apis mellifera scutellata) abscond more frequently and defend more quickly than colonies of European origin. Because they also utilize smaller cavities for building colonies, their range of suitable hive locations extends to common objects in urban environments. The aim of the AHB study was to create a model of this quantitative spatial process to predict where AHBs were more likely to build a colony, and to explore what variables might be related to the occurrences of colonies. We constructed two generalized linear models to predict the habitation of water meter boxes, based on surrounding landscape classifications, whether there were colonies in surrounding areas, and other variables. The presence of colonies in the area was a strong predictor of whether AHBs occupied a water meter box, suggesting that AHBs tend to form aggregations, and that the removal of a colony from a water meter box may make other nearby boxes less attractive to the bees.
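The occupancy models described above are binomial generalized linear models. A hypothetical sketch of that kind of model, using statsmodels, is shown below; the covariates, effect sizes, and simulated data are invented purely for illustration and are not the study's variables or results.

import numpy as np
import statsmodels.api as sm

# Simulated stand-in data: is a water meter box occupied, given nearby colonies
# and a landscape covariate?
rng = np.random.default_rng(3)
n = 200
nearby_colony = rng.integers(0, 2, size=n)             # colonies present in the surrounding area?
urban_fraction = rng.uniform(size=n)                    # landscape classification covariate
logit = -2.0 + 2.5 * nearby_colony + 0.5 * urban_fraction
occupied = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated habitation outcome

X = sm.add_constant(np.column_stack([nearby_colony, urban_fraction]))
model = sm.GLM(occupied, X, family=sm.families.Binomial()).fit()
print(model.summary())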
