21

Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems

Ben Rached, Nadhir 08 October 2018
The probability that a sum of random variables (RVs) exceeds (respectively, falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and a naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is the use of variance reduction techniques, known for requiring fewer computations to achieve the same accuracy. For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed wherein the twisting is applied only to the components having the greatest impact on the probability of interest. Another challenging problem arises when the components are correlated and distributed according to the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean shifting and covariance matrix scaling techniques, and we prove that the logarithmic efficiency holds again for two particular instances. We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys the logarithmic efficiency criterion, whereas the second satisfies the bounded relative error criterion under a mild assumption but is only applicable to the case of independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered. In fact, we construct an estimator combining an existing mean shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves the bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property for the Generalized Gamma case and logarithmic efficiency for the Log-normal case.
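To make the variance-reduction idea concrete, here is a minimal Python sketch comparing naive MC against an exponentially twisted IS estimator for the right tail of a sum of i.i.d. exponential RVs. This illustrates only the general tilting principle, not the thesis's hazard rate twisting scheme; the tilt parameter below follows the standard heuristic of matching the twisted mean of the sum to the threshold.

    import numpy as np

    rng = np.random.default_rng(0)

    def naive_mc(n, lam, gamma, N):
        # P(X_1 + ... + X_n > gamma) with X_i ~ Exp(lam), by plain Monte Carlo.
        s = rng.exponential(1.0 / lam, size=(N, n)).sum(axis=1)
        return (s > gamma).mean()

    def twisted_is(n, lam, gamma, N):
        # Exponential twisting: sample X_i ~ Exp(lam - theta) and reweight by
        # the likelihood ratio so the estimator stays unbiased. theta is set
        # so the twisted mean n / (lam - theta) equals gamma (needs gamma > n / lam).
        theta = lam - n / gamma
        x = rng.exponential(1.0 / (lam - theta), size=(N, n))
        s = x.sum(axis=1)
        lr = (lam / (lam - theta)) ** n * np.exp(-theta * s)
        return ((s > gamma) * lr).mean()

    # For a rare event, naive MC typically returns 0 while IS still estimates it.
    print(naive_mc(10, 1.0, 40.0, 100_000))    # almost surely 0.0
    print(twisted_is(10, 1.0, 40.0, 100_000))  # small but stable estimate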
22

Structure in machine learning : graphical models and Monte Carlo methods

Rowland, Mark January 2018
This thesis is concerned with two main areas: approximate inference in discrete graphical models, and random embeddings for dimensionality reduction and approximate inference in kernel methods. Approximate inference is a fundamental problem in machine learning and statistics, with strong connections to other domains such as theoretical computer science. At the same time, there has often been a gap between the success of many algorithms in this area in practice, and what can be explained by theory; thus, an important research effort is to bridge this gap. Random embeddings for dimensionality reduction and approximate inference have led to great improvements in scalability of a wide variety of methods in machine learning. In recent years, there has been much work on how the stochasticity introduced by these approaches can be better controlled, and what further computational improvements can be made. In the first part of this thesis, we study approximate inference algorithms for discrete graphical models. Firstly, we consider linear programming methods for approximate MAP inference, and develop our understanding of conditions for exactness of these approximations. Such guarantees of exactness are typically based on either structural restrictions on the underlying graph corresponding to the model (such as low treewidth), or restrictions on the types of potential functions that may be present in the model (such as log-supermodularity). We contribute two new classes of exactness guarantees: the first of these takes the form of particular hybrid restrictions on a combination of graph structure and potential types, whilst the second is given by excluding particular substructures from the underlying graph, via graph minor theory. We also study a particular family of transformation methods of graphical models, uprooting and rerooting, and their effect on approximate MAP and marginal inference methods. We prove new theoretical results on the behaviour of particular approximate inference methods under these transformations, in particular showing that the triplet relaxation of the marginal polytope is unique in being universally rooted. We also introduce a heuristic which quickly picks a rerooting, and demonstrate benefits empirically on models over several graph topologies. In the second part of this thesis, we study Monte Carlo methods for both linear dimensionality reduction and approximate inference in kernel machines. We prove the statistical benefit of coupling Monte Carlo samples to be almost-surely orthogonal in a variety of contexts, and study fast approximate methods of inducing this coupling. A surprising result is that these approximate methods can simultaneously offer improved statistical benefits, time complexity, and space complexity over i.i.d. Monte Carlo samples. We evaluate our methods on a variety of datasets, directly studying their effects on approximate kernel evaluation, as well as on downstream tasks such as Gaussian process regression.
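As a rough illustration of the orthogonal-coupling idea for approximate kernel evaluation, here is a sketch of one standard construction (orthogonal random Fourier features, assuming the number of features does not exceed the input dimension); it is not necessarily the exact estimator family studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def features(X, W):
        # Random Fourier features for the RBF kernel k(x, y) = exp(-||x - y||^2 / 2).
        Z = X @ W.T
        return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(W.shape[0])

    def iid_weights(m, d):
        return rng.standard_normal((m, d))

    def orthogonal_weights(m, d):
        # Couple the rows to be exactly orthogonal (requires m <= d); rescaling
        # each row to a chi_d-distributed norm keeps the N(0, I) marginals intact.
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        norms = np.sqrt(rng.chisquare(d, size=m))
        return Q[:m] * norms[:, None]

    X = rng.standard_normal((50, 16))
    K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # exact Gram matrix
    for make_w in (iid_weights, orthogonal_weights):
        Phi = features(X, make_w(16, 16))
        print(make_w.__name__, np.abs(Phi @ Phi.T - K).mean())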
23

Cooperative Localization In Wireless Networked Systems

Castillo-Effen, Mauricio 22 October 2007
A novel solution for the localization of wireless networked systems is presented. The solution is based on cooperative estimation, inter-node ranging, and strap-down inertial navigation. This approach overcomes limitations that are commonly found in currently available localization/positioning solutions. Some solutions, such as GPS, make use of previously deployed infrastructure. In other methods, computations are performed in a central fusion center. In the robotics field, current localization techniques rely on a simultaneous localization and mapping (SLAM) process, which is slow and requires sensors such as laser range finders or cameras. One of the main attributes of this research is the holistic view of the problem and a systems-engineering approach, which begins with analyzing requirements and establishing metrics for localization. This all-encompassing approach provides for concurrent consideration and integration of several aspects of the localization problem, from sensor fusion algorithms for position estimation to the communication protocols required for enabling cooperative localization. As a result, a conceptual solution is presented that is flexible, general, and adaptable to a variety of application scenarios. A major advantage of the solution resides in the utilization of wireless network interfaces for communications and for exteroceptive sensing. In addition, the localization solution can be seamlessly integrated into other localization schemes, providing faster convergence, higher accuracy, and lower latency. Two case studies were employed for developing the main aspects of cooperative localization: wireless sensor networks and multi-robot systems composed of ground robots provided the information base from which this research was launched. In the wireless sensor network field, novel nonlinear cooperative estimation algorithms are proposed for sequential position estimation. In the field of multi-robot systems, the issues of mobility and proprioception, which uses inertial measurement systems for estimating motion, are considered. Motion information, in conjunction with range information and communications, can be used for accurate localization and tracking of mobile nodes. A novel partitioning of the sensor fusion problem is presented, which combines an extended Kalman filter for dead-reckoning and particle filters for aiding navigation.
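The ranging-based estimation step can be sketched as a toy 2D particle filter update in Python, with hypothetical noise parameters: motion-propagated particles are weighted by the likelihood of a range measurement to a neighbouring node. The thesis's full solution combines such updates with an EKF for dead-reckoning; this is only an illustration of the principle.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict(particles, velocity, dt, sigma_motion):
        # Dead-reckoning step: propagate particles and inject motion noise.
        return particles + velocity * dt + rng.normal(0, sigma_motion, particles.shape)

    def range_update(particles, weights, neighbour_pos, r_meas, sigma_r):
        # Weight each particle by the Gaussian likelihood of the measured
        # inter-node range, then renormalize.
        d = np.linalg.norm(particles - neighbour_pos, axis=1)
        weights = weights * np.exp(-0.5 * ((d - r_meas) / sigma_r) ** 2)
        return weights / weights.sum()

    particles = rng.normal([0.0, 0.0], 5.0, size=(1000, 2))
    weights = np.full(1000, 1.0 / 1000)
    particles = predict(particles, np.array([1.0, 0.5]), 1.0, 0.2)
    weights = range_update(particles, weights, np.array([10.0, 0.0]), 9.0, 0.5)
    print(np.average(particles, axis=0, weights=weights))  # position estimate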
24

Construction of lattice rules for multiple integration based on a weighted discrepancy

Sinescu, Vasile January 2008
High-dimensional integrals arise in a variety of areas, including quantum physics, the physics and chemistry of molecules, statistical mechanics, and more recently, financial applications. In order to approximate multidimensional integrals, one may use Monte Carlo methods, in which the quadrature points are generated randomly, or quasi-Monte Carlo methods, in which the points are generated deterministically. One particular class of quasi-Monte Carlo methods for multivariate integration is represented by lattice rules. The lattice rules constructed throughout this thesis allow good approximations to integrals of functions belonging to certain weighted function spaces. These function spaces were proposed as an explanation as to why integrals in many variables appear to be successfully approximated although the standard theory indicates that the number of quadrature points required for reasonable accuracy would be astronomical because of the large number of variables. The purpose of this thesis is to contribute theoretical results regarding the construction of lattice rules for multiple integration. We consider both lattice rules for integrals over the unit cube and lattice rules suitable for integrals over Euclidean space. The research reported throughout the thesis is devoted to finding the generating vector required to produce lattice rules that have what is termed a low "weighted discrepancy". In simple terms, the discrepancy is a measure of the uniformity of the distribution of the quadrature points or, in other settings, a worst-case error. One of the assumptions used in these weighted function spaces is that the variables are arranged in decreasing order of importance, and the assignment of weights in this situation results in so-called "product weights". In other applications it is rather the importance of groups of variables that matters; this situation is modelled by using function spaces in which the weights are "general". In the weighted settings mentioned above, the quality of the lattice rules is assessed by the weighted discrepancy mentioned earlier. Under appropriate conditions on the weights, the lattice rules constructed here produce a convergence rate of the error that ranges from O(n^(-1/2)) to the (believed) optimal O(n^(-1+δ)) for any δ > 0, with the involved constant independent of the dimension.
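For illustration, a rank-1 lattice rule uses the quadrature points x_k = frac(k z / n), k = 0, ..., n-1, for a generating vector z. Here is a minimal Python sketch; the generating vector below is an arbitrary hand-picked example, not one produced by the constructions analysed in the thesis.

    import numpy as np

    def lattice_rule(f, n, z):
        # Rank-1 lattice rule: average f over the points (k * z / n) mod 1.
        k = np.arange(n)[:, None]
        points = (k * np.asarray(z)[None, :] / n) % 1.0
        return f(points).mean()

    # Test integrand on [0,1]^4 with known integral equal to 1.
    f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)
    z = [1, 3531, 2941, 6194]       # hypothetical generating vector
    print(lattice_rule(f, 10007, z))  # close to 1.0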
25

Resampling in particle filters

Hol, Jeroen D. January 2004
In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and computational complexity.
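For reference, systematic resampling admits a very short implementation — a sketch in Python (NumPy), using a single uniform draw stratified over N equally spaced positions:

    import numpy as np

    def systematic_resample(weights, rng):
        # One uniform offset shared by N evenly spaced positions; returns the
        # indices of the particles selected for the next generation.
        N = len(weights)
        positions = (rng.uniform() + np.arange(N)) / N
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0  # guard against floating-point round-off
        return np.searchsorted(cumulative, positions)

    rng = np.random.default_rng(0)
    w = np.array([0.1, 0.2, 0.3, 0.4])
    print(systematic_resample(w, rng))  # -> [1 2 3 3]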
26

Theory of Disordered Magnets

Peil, Oleg E. January 2009
Studying the magnetic properties of disordered alloys is important both for the understanding of phase transformations in alloys and from the point of view of fundamental issues of magnetism in solids. Disorder in a magnetic system can result in unconventional magnetic structures, such as spin glass, which have rather peculiar features. In this thesis, a rather general approach to studying disordered magnetic alloys from first principles is presented. Phase transformations and magnetic behavior of crystalline substitutional alloys are considered. The approach is exemplified by calculations on an archetypical spin-glass material: the CuMn alloy. First, a general theoretical framework for the description of the thermodynamics of disordered magnetic alloys is given. It is shown that under certain conditions, a complex magnetic system can be reduced to an effective system containing no magnetic degrees of freedom. This substantially simplifies the investigation of phase transformations in magnetic alloys. The effective model is described in terms of material-specific interaction parameters. It is shown that the interaction parameters can be obtained from the ground-state properties of a disordered alloy, which are in turn calculated from first principles by means of highly accurate, up-to-date numerical techniques based on the Green's function method. The interaction parameters can subsequently be used in thermodynamic Monte Carlo simulations to produce the atomic and magnetic structures of an alloy. An example of calculations for the Cu-rich CuMn alloy is given. It is demonstrated that the atomic and magnetic structure of the alloy obtained by the presented approach agrees very well with the results of neutron-scattering experiments for this system. Moreover, numerical simulations enable one to predict the ground-state structure of the alloy, which is difficult to observe in experiment due to large atomic diffusion barriers at temperatures close to the temperature of the phase transformation. A general description of a spin glass is given, and the difficulties of modeling this type of magnetic system are discussed. To overcome these difficulties, improved Monte Carlo methods are introduced, such as parallel tempering, the overrelaxation technique, and the finite-size scaling method of analysis. The results for the CuMn alloy are presented.
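The replica-exchange (parallel tempering) move is simple to state. Below is a minimal Python sketch for an Ising-type chain with random couplings — a generic spin-glass toy with hypothetical parameters, not the first-principles CuMn model of the thesis. The swap acceptance probability is the standard min(1, exp((β_i − β_j)(E_i − E_j))).

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(spins, J):
        # Ising chain with random (spin-glass-like) nearest-neighbour couplings.
        return -np.sum(J * spins[:-1] * spins[1:])

    def metropolis_sweep(spins, J, beta):
        # Unoptimized single-spin-flip Metropolis sweep (full energy recompute).
        for i in rng.integers(0, len(spins), len(spins)):
            trial = spins.copy()
            trial[i] *= -1
            dE = energy(trial, J) - energy(spins, J)
            if dE <= 0 or rng.uniform() < np.exp(-beta * dE):
                spins = trial
        return spins

    def tempering_swap(replicas, betas, J):
        # Attempt to exchange replicas at neighbouring temperatures.
        for i in range(len(betas) - 1):
            dE = energy(replicas[i], J) - energy(replicas[i + 1], J)
            if rng.uniform() < np.exp(min(0.0, (betas[i] - betas[i + 1]) * dE)):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
        return replicas

    J = rng.choice([-1.0, 1.0], size=31)               # random couplings
    betas = np.linspace(0.1, 2.0, 8)                   # temperature ladder
    replicas = [rng.choice([-1, 1], size=32) for _ in betas]
    for _ in range(100):
        replicas = [metropolis_sweep(s, J, b) for s, b in zip(replicas, betas)]
        replicas = tempering_swap(replicas, betas, J)

The point of the swap move is that hot replicas cross energy barriers easily and then diffuse down the temperature ladder, so cold replicas are not trapped in local minima — the main difficulty in spin-glass simulation.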
27

Dynamic Data Driven Application System for Wildfire Spread Simulation

Gu, Feng 14 December 2010
Wildfires have a significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all these data without error, so predictions from the simulation model will differ from the behavior of a real wildfire. Without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the simulation and the real wildfire is very likely to grow continuously. With the development of sensor technologies and the advance of computer infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors is fed into the simulation model to make predictions of the real system. This dynamic input is treated as a measurement to evaluate the output and adjust the states of the model, and thus improve simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model to the observation data of the true state and uses analysis approaches to obtain state estimations. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results in wildfire spread simulations. We carry out sensitivity analyses for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
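A stripped-down version of such an assimilation loop might look as follows in Python — a toy one-dimensional "front position" state standing in for the wildfire model, with hypothetical noise levels. The biased model spread rate shows how assimilating sensor readings keeps the simulation on track.

    import numpy as np

    rng = np.random.default_rng(0)

    def assimilation_step(particles, observation, spread_rate, sigma_model, sigma_obs):
        # Predict: advance each particle with the (imperfect) spread model.
        particles = particles + spread_rate + rng.normal(0, sigma_model, len(particles))
        # Update: weight by the sensor likelihood, then resample.
        w = np.exp(-0.5 * ((particles - observation) / sigma_obs) ** 2)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    particles = np.zeros(500)
    true_front = 0.0
    for t in range(20):
        true_front += 1.3                                 # real spread rate
        obs = true_front + rng.normal(0, 0.5)             # noisy sensor reading
        particles = assimilation_step(particles, obs, 1.0, 0.4, 0.5)  # biased model
    print(particles.mean(), true_front)  # the estimate tracks the true front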
28

General Adaptive Monte Carlo Bayesian Image Denoising

Zhang, Wen January 2010
Image noise reduction, or denoising, is an active area of research, although many of the techniques cited in the literature mainly target additive white noise. With an emphasis on signal-dependent noise, this thesis presents the General Adaptive Monte Carlo Bayesian Image Denoising (GAMBID) algorithm, a model-free approach based on random sampling. Testing is conducted on synthetic images with two different signal-dependent noise types as well as on real synthetic aperture radar and ultrasound images. Results show that GAMBID can achieve state-of-the-art performance but suffers from some limitations in dealing with textures and fine low-contrast features. These aspects can be addressed in future iterations when GAMBID is expanded to become a versatile denoising framework.
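As a loose illustration of the random-sampling flavour of such methods — a toy pixel-wise Monte Carlo weighted average, not the GAMBID algorithm itself; the bandwidth h and sample count are hypothetical tuning parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def mc_denoise(img, n_samples=200, h=0.1):
        # For each pixel, draw random candidate pixels from the whole image and
        # average them, weighted by intensity similarity to the noisy pixel.
        H, W = img.shape
        out = np.empty_like(img)
        for i in range(H):
            for j in range(W):
                ii = rng.integers(0, H, n_samples)
                jj = rng.integers(0, W, n_samples)
                cand = img[ii, jj]
                w = np.exp(-((cand - img[i, j]) ** 2) / (2.0 * h * h))
                out[i, j] = np.sum(w * cand) / np.sum(w)
        return out

    clean = np.tile(np.linspace(0, 1, 64), (64, 1))
    noisy = np.clip(clean + 0.2 * clean * rng.standard_normal(clean.shape), 0, 1)
    print(np.abs(noisy - clean).mean(), np.abs(mc_denoise(noisy) - clean).mean())

Note the multiplicative noise term 0.2 * clean * noise: the noise strength scales with the signal, mimicking the signal-dependent setting the thesis targets.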
