41. Estimation of Probability of Failure for Damage-Tolerant Aerospace Structures. Halbert, Keith (January 2014)
The majority of aircraft structures are designed to be damage-tolerant, such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair at future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures.

The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates that incorporate observed evidence. A PDTA modeling methodology must be flexible enough to accurately estimate the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available.

This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open-source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight by flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems.
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft. / Statistics
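The hidden Markov model / sequential importance sampling idea can be sketched in a few lines: particles carry candidate crack lengths forward flight by flight, each inspection outcome reweights them, and the failure probability is a weighted tail count. A minimal illustration follows (this is not the dissertation's R package; the growth law, detection curve, and every constant below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model assumptions (not taken from the dissertation):
N = 20_000                          # particles (simulated crack histories)
a_crit = 25.0                       # critical crack length (mm)
flights = 2000                      # number of future flights to simulate
inspections = {500, 1000, 1500}     # flights after which an inspection occurs

def grow(a):
    """One flight of stochastic crack growth (lognormal multiplicative noise)."""
    return a * np.exp(rng.normal(0.002, 0.001, size=a.shape))

def pod(a):
    """Probability of detecting a crack of length a (illustrative logistic curve)."""
    return 1.0 / (1.0 + np.exp(-(a - 5.0)))

a = rng.lognormal(mean=0.0, sigma=0.5, size=N)   # initial crack lengths (mm)
w = np.full(N, 1.0 / N)                          # importance weights

for t in range(1, flights + 1):
    a = grow(a)
    if t in inspections:
        # Evidence "no crack was found": reweight each particle by the
        # probability of non-detection given its crack length.
        w *= 1.0 - pod(a)
        w /= w.sum()
        # Resample when the effective sample size degenerates.
        if 1.0 / np.sum(w**2) < N / 2:
            idx = rng.choice(N, size=N, p=w)
            a, w = a[idx], np.full(N, 1.0 / N)

print(f"estimated P(failure within {flights} flights): {np.sum(w[a >= a_crit]):.3e}")
```

A full PDTA model would also branch on repair after a detection; the sketch conditions only on non-detection to show the evidence-updating mechanism.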
42. Towards Scalable Machine Learning with Privacy Protection. Fay, Dominik (January 2023)
The increasing size and complexity of datasets have accelerated the development of machine learning models and exposed the need for more scalable solutions. This thesis explores challenges associated with large-scale machine learning under data privacy constraints. With the growth of machine learning models, traditional privacy methods such as data anonymization are becoming insufficient. Thus, we delve into alternative approaches, such as differential privacy. Our research addresses the following core areas in the context of scalable privacy-preserving machine learning. First, we examine the implications of data dimensionality on privacy for the application of medical image analysis. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to deal with high-dimensional labels, and demonstrate that dimensionality reduction can be used to improve privacy. Second, we consider the impact of hyperparameter selection on privacy. Here, we propose a novel adaptive technique for hyperparameter selection in differentially private gradient-based optimization. Third, we investigate sampling-based solutions to scale differentially private machine learning to datasets with a large number of records. We study the privacy-enhancing properties of importance sampling, highlighting that it can outperform uniform sub-sampling not only in terms of sample efficiency but also in terms of privacy. The three techniques developed in this thesis improve the scalability of machine learning while ensuring robust privacy protection, and aim to offer solutions for the effective and safe application of machine learning to large datasets.
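The PATE mechanism named above aggregates an ensemble of teacher predictions with calibrated noise and releases only the winning class. A minimal sketch of that noisy-argmax step (the teacher count, class count, and noise scale are illustrative assumptions; the thesis's high-dimensional-label extension is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

num_teachers, num_classes = 250, 10
# Illustrative teacher predictions for one query (each teacher votes for a class).
votes = rng.integers(0, num_classes, size=num_teachers)

def noisy_argmax(votes, num_classes, scale, rng):
    """PATE-style aggregation: add Laplace noise to the per-class vote counts
    and release only the argmax. A smaller `scale` means less noise and
    therefore weaker privacy."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, scale, size=num_classes)
    return int(np.argmax(counts))

label = noisy_argmax(votes, num_classes, scale=20.0, rng=rng)
print("privately aggregated label:", label)
```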
43. Simulation and analytic evaluation of false alarm probability of a non-linear detector. Amirichimeh, Reza (January 1991)
One would like to evaluate and compare complex digital communication systems based upon their overall bit error rate. Unfortunately, analytical expressions for the bit error rate of even simple communication systems are notoriously difficult to evaluate accurately. Therefore, communication engineers often resort to simulation techniques to evaluate these error probabilities. In this thesis, importance sampling techniques (variations of standard Monte Carlo methods) are studied in relation to both linear and non-linear detectors. Quick simulation, an importance sampling method based upon the asymptotics of the error estimator, is studied in detail. The simulated error probabilities are compared to values obtained by numerically inverting Laplace transform expressions for these quantities.
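As a toy version of the simulation problem studied here, consider estimating a small false alarm probability P(X > t) for Gaussian noise. The sketch below compares crude Monte Carlo with importance sampling under a mean-shifted (exponentially tilted) proposal; the threshold and sample size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 5.0, 100_000          # detection threshold and sample size
# True value for reference: P(N(0,1) > 5) is about 2.87e-7.

# Crude Monte Carlo: almost no samples fall above t, so the estimate is poor.
x = rng.standard_normal(n)
p_mc = np.mean(x > t)

# Importance sampling: draw from N(t, 1) so exceedances are common, and
# reweight by the likelihood ratio phi(y) / phi(y - t) = exp(t^2/2 - t*y).
y = rng.standard_normal(n) + t
lr = np.exp(0.5 * t**2 - t * y)
p_is = np.mean((y > t) * lr)

print(f"crude MC:            {p_mc:.3e}")
print(f"importance sampling: {p_is:.3e}")
```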
44. Embedding population dynamics in mark-recapture models. Bishop, Jonathan R. B. (January 2009)
Mark-recapture methods use repeated captures of individually identifiable animals to provide estimates of properties of populations. Different models allow estimates to be obtained for population size and rates of the processes governing population dynamics. State-space models consist of two linked processes evolving simultaneously over time. The state process models the evolution of the true, but unknown, states of the population. The observation process relates observations on the population to these true states. Mark-recapture models specified within a state-space framework allow population dynamics models to be embedded in inference, ensuring that estimated changes in the population are consistent with assumptions regarding the biology of the modelled population. This overcomes a limitation of current mark-recapture methods.

Two alternative approaches are considered. The "conditional" approach conditions on the known numbers of animals possessing capture-history patterns that include capture in the current time period. An animal's capture history determines its state; consequently, capture parameters appear in the state process rather than the observation process. There is no observation error in the model. Uncertainty occurs only through the numbers of animals not captured in the current time period. An "unconditional" approach is also considered, in which the capture histories are regarded as observations. Consequently, capture histories do not influence an animal's state, and capture probability parameters appear in the observation process. Capture histories are considered a random realization of the stochastic observation process. This is more consistent with traditional mark-recapture methods.

Development and implementation of particle filtering techniques for fitting these models under each approach are discussed. Simulation studies show reasonable performance for the unconditional approach and highlight problems with the conditional approach. Strengths and limitations of each approach are outlined, with reference to an analysis of Soay sheep data, and suggestions are presented for future analyses.
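A minimal sketch of the "unconditional" flavor of this idea, with yearly capture counts treated as binomial observations of a latent population size whose dynamics are embedded in the state process (the survival, recruitment, and capture parameters and the data are invented for the example, not the thesis's Soay sheep model):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not estimates from the thesis):
phi, rho, p_cap = 0.8, 0.3, 0.4     # survival, per-capita recruitment, capture prob.
captures = [38, 35, 41, 37, 44]     # observed numbers of captured animals per year

M = 10_000
N = rng.poisson(100, size=M)        # particles: latent population sizes
w = np.full(M, 1.0 / M)

for y in captures:
    # State process: survival then recruitment (the embedded population dynamics).
    N = rng.binomial(N, phi) + rng.poisson(rho * N)
    # Observation process: each animal is captured independently with prob p_cap.
    w = binom.pmf(y, N, p_cap)
    w /= w.sum()
    # Resample every year (a bootstrap filter) to avoid weight degeneracy.
    idx = rng.choice(M, size=M, p=w)
    N, w = N[idx], np.full(M, 1.0 / M)

print("posterior mean population size in final year:", N.mean())
```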
45. Simulation d'évènements rares par Monte Carlo dans les réseaux hautement fiables / Rare event simulation using Monte Carlo in highly reliable networks. Saggadi, Samira (8 July 2013)
Computing network reliability is in general an NP-hard problem. In telecommunications, for example, one wants to evaluate the probability that a selected group of nodes can communicate. In this setting, a set of disconnected nodes can have critical consequences, whether financial or safety-related, so a precise estimate of the reliability is needed. This work is concerned with the study and computation of the reliability of highly reliable networks. In this case the unreliability is very small, which renders the standard Monte Carlo approach useless, because it would require a very large number of samples. To obtain good estimates of network reliability at minimal cost, we have developed new simulation techniques based on variance reduction through importance sampling.
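The variance-reduction idea can be illustrated on a tiny two-terminal reliability problem: sample link failures from an inflated failure probability so that disconnections actually occur, then reweight by the likelihood ratio. A sketch under invented assumptions (the bridge network, failure probability, and proposal are all illustrative, not the techniques developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative "bridge" network: nodes 0..3, terminals s = 0 and t = 3.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
q = 1e-3          # each link fails independently with this probability

def disconnected(up):
    """True if no path of working links connects node 0 to node 3."""
    adj = {v: set() for v in range(4)}
    for (a, b), works in zip(edges, up):
        if works:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for nxt in adj[stack.pop()] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return 3 not in seen

n, q_is = 20_000, 0.3    # sample link failures from an inflated probability
fail = rng.random((n, len(edges))) < q_is
# Likelihood ratio of the true link-state distribution vs. the sampling one.
lr = np.prod(np.where(fail, q / q_is, (1 - q) / (1 - q_is)), axis=1)
est = np.mean([disconnected(u) * l for u, l in zip(~fail, lr)])
print(f"estimated unreliability: {est:.3e}")   # roughly 2e-6 for this network
```

With crude Monte Carlo at q = 1e-3, a disconnection would almost never be observed in 20,000 samples; the reweighted estimator sees many and stays unbiased.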
46. Stochastic Modeling and Simulation of Gene Networks. Xu, Zhouyi (6 May 2010)
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, sophisticated stochastic modeling techniques and efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks have been lacking. The objective of this research is therefore to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks, and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs, and we proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system.

The K-leap method constrains the total number of reactions in one leap to a properly chosen number, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K = 1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks in which certain reactant molecular species have a small number of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limits simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed.

In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM). We further developed a systematic method for selecting the values of the importance sampling parameters. Applying our parameter selection method to the wSSA and the wNRM, we obtain an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which can provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy.

We also developed a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employed our SSAs to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise, in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in the oscillation peaks. Moreover, although average time delays are essential to the simulation of oscillation, random changes in time delays within a certain range around the fixed average time delay cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that the oscillation can be entrained by light/dark cycles.
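The exact SSA that the leap methods generalize is Gillespie's direct method: draw an exponential waiting time from the total propensity, then pick a reaction with probability proportional to its rate. A minimal sketch for a toy birth-death gene expression model (the rate constants are illustrative; none of the thesis's K-leap or weighted variants are shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gene expression model: mRNA is produced at rate k_prod and degraded
# at rate k_deg per molecule (illustrative constants).
k_prod, k_deg = 10.0, 0.5
x, t, t_end = 0, 0.0, 50.0

while t < t_end:
    rates = np.array([k_prod, k_deg * x])   # propensities of the two reactions
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # time to the next reaction event
    # Choose which reaction fires, with probability proportional to its rate.
    if rng.random() < rates[0] / total:
        x += 1                              # production
    else:
        x -= 1                              # degradation

print("mRNA copy number at t =", t_end, ":", x)   # steady-state mean is k_prod/k_deg = 20
```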
47. Quasi Importance Sampling. Hörmann, Wolfgang; Leydold, Josef (January 2005)
Two problems arise when the expectation of some function with respect to a nonuniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded, and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method to overcome both problems. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
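The recipe can be sketched in one dimension: push a low-discrepancy sequence through the inverse CDF of an easy-to-sample importance density g, then weight by f/g. The integrand and the Student-t proposal below are illustrative choices, not the paper's examples:

```python
import numpy as np
from scipy import stats

# Target: E[h(X)] for X ~ N(0, 1), with h(x) = x^2 (true value 1).
h = lambda x: x**2
f = stats.norm.pdf

# Importance density g: Student-t with 3 degrees of freedom. Its heavier
# tails keep the weights f/g bounded on the unbounded domain, and sampling
# from it via the inverse CDF is straightforward.
g = stats.t(df=3)

# Low-discrepancy points in (0, 1), pushed through the inverse CDF of g.
u = stats.qmc.Sobol(d=1, scramble=True, seed=0).random(2**14).ravel()
x = g.ppf(u)

est = np.mean(h(x) * f(x) / g.pdf(x))
print(f"quasi-importance-sampling estimate of E[X^2]: {est:.4f}")
```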
48. Pokročilé simulační metody pro spolehlivostní analýzu konstrukcí / Advanced simulation methods for reliability analysis of structures. Gerasimov, Aleksei (January 2019)
The thesis applies to reliability problems the approach of Voronoi tessellation, typically used in the evaluation of sample designs and for reweighting Monte Carlo samples. It is shown that the estimate produced by this general technique converges to that of the importance sampling method, even though it does not rely on importance sampling's auxiliary density. Consequently, reliability analysis can be divided into the sampling itself and the assessment of the simulation results. As an extension of this idea, adaptive statistical sampling using the QHull library was attempted.
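One loose reading of the tessellation idea (a sketch, not the thesis's implementation): weight each simulation point by the volume of its Voronoi cell, approximated here by nearest-neighbour counts over a dense uniform cloud, and sum the weights of the points that fall in the failure domain. The limit-state function and all constants are invented for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Illustrative limit state on the unit square: failure where g(x) < 0.
g = lambda x: 1.5 - x[:, 0] - x[:, 1]

pts = rng.random((200, 2))       # the simulation points, however they were drawn

# Approximate each point's Voronoi cell volume by the fraction of a dense
# uniform cloud whose nearest simulation point it is.
cloud = rng.random((200_000, 2))
_, nearest = cKDTree(pts).query(cloud)
vol = np.bincount(nearest, minlength=len(pts)) / len(cloud)

p_fail = vol[g(pts) < 0.0].sum()
print(f"estimated failure probability: {p_fail:.3f}")   # exact value is 0.125
```

Note that no auxiliary sampling density appears anywhere: the weights come entirely from the geometry of the sampled points, which is the property the abstract emphasizes.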
49. The Generalized Multiset Sampler: Theory and Its Application. Kim, Hang Joon (25 June 2012)
No description available.
50. Importance sampling in deep learning: A broad investigation on importance sampling performance. Johansson, Mathias; Lindberg, Emma (January 2022)
Available computing resources play a large part in enabling the training of modern deep neural networks for complex computer vision tasks. Improving the efficiency with which this computational power is utilized is important for enterprises that want to improve their networks rapidly. The first few training iterations over the data set often produce substantial gradients and quick improvements in the network. At later stages, most of the training time is spent on samples that produce tiny gradient updates and are already handled properly. To make neural network training more efficient, researchers have used methods that give more attention to the samples that still produce relatively large gradient updates for the network. These methods are known as importance sampling. When used, importance sampling reduces the variance of the sampling and concentrates the training on the more informative examples.

This thesis contributes to the studies on importance sampling by investigating its effectiveness in different contexts. In comparison to other studies, we examine image classification more extensively by exploring different network architectures over a wide range of parameter counts. Similar to earlier studies, we apply several ways of doing importance sampling across several datasets. While most previous research on importance sampling strategies applies it to image classification, our research aims at generalizing the results by applying it to object detection problems in addition to image classification.

Our research on image classification tasks conclusively suggests that importance sampling can speed up the training of deep neural networks. When performance at convergence is the vital metric, our importance sampling methods show mixed results. For the object detection tasks, preliminary experiments have been conducted; however, the findings lack enough data to conclusively demonstrate the effectiveness of importance sampling in object detection.
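The per-sample reweighting these training schemes rely on can be shown on a toy logistic regression in NumPy. Sampling the batch with probability proportional to the per-sample loss is one of several scores used in the literature; the dataset is synthetic and all hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data.
n, d = 5000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

w, lr, batch = np.zeros(d), 0.1, 64
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ w)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Importance sampling: pick the batch with probability proportional to the
    # per-sample loss, so informative samples are seen more often ...
    q = loss / loss.sum()
    idx = rng.choice(n, size=batch, p=q)
    # ... and divide each gradient by (n * q_i) to keep the estimator unbiased.
    corr = 1.0 / (n * q[idx])
    grad = ((p[idx] - y[idx]) * corr) @ X[idx] / batch
    w -= lr * grad

print("training accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```

In practice the scores must be kept stale or approximated, since rescoring the whole dataset at every step, as this toy loop does, would defeat the purpose of the method.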