81

Variable Sampling Rate Control Charts for Monitoring Process Variance

Hughes, Christopher Scott 20 May 1999 (has links)
Industrial processes are subject to changes that can adversely affect product quality. A change that increases the variability of the process output makes the output less uniform and increases the probability that individual items will not meet specifications. Statistical control charts for monitoring process variance can be used to detect an increase in the variability of the output so that the problem can be repaired and product uniformity restored. Control charts that increase the sampling rate when there is evidence the variance has changed gather information more quickly and detect changes in the variance more quickly (on average) than fixed sampling rate procedures. Several variable sampling rate procedures for detecting increases in the process variance are developed and compared with fixed sampling rate methods. A control chart for the variance is usually used with a separate control chart for the mean, so that changes in the average level of the process and in its variability can both be detected. A simple method for applying variable sampling rate techniques to dual monitoring of the mean and variance is developed. This control chart procedure increases the sampling rate when there is evidence that the mean or variance has changed, so that changes in either parameter that would negatively impact product quality are detected quickly. / Ph. D.
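As a rough illustration of the variable sampling rate idea, here is a minimal Python sketch of a variable sampling interval (VSI) S² chart: the next sample is taken sooner when the subgroup variance falls in a warning region. The limits, subgroup size, interval lengths, and shifted variance are illustrative assumptions, not the dissertation's actual chart designs.

```python
# Minimal VSI S^2 chart sketch; in-control variance sigma0^2 = 1, subgroup
# size n = 5, with illustrative warning/action limits and sampling intervals.
import numpy as np
from scipy.stats import chi2

n, sigma0_sq = 5, 1.0
ucl = sigma0_sq * chi2.ppf(0.995, df=n - 1) / (n - 1)   # action limit
warn = sigma0_sq * chi2.ppf(0.95, df=n - 1) / (n - 1)   # warning limit
d_long, d_short = 4.0, 0.5                              # sampling intervals (hours)

rng = np.random.default_rng(1)
t, shifted_var = 0.0, 1.5 * sigma0_sq   # assumed out-of-control variance
while True:
    s_sq = np.var(rng.normal(0.0, np.sqrt(shifted_var), size=n), ddof=1)
    if s_sq > ucl:
        print(f"signal at t = {t:.1f} h (S^2 = {s_sq:.2f})")
        break
    # VSI rule: sample again sooner when S^2 falls in the warning region
    t += d_short if s_sq > warn else d_long
```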
82

Bayesian Anatomy of Galaxy Structure

Yoon, Ilsang 01 February 2013 (has links)
In this thesis I develop a Bayesian approach to model galaxy surface brightness and apply it to a bulge-disc decomposition analysis of galaxies in the near-infrared band from the Two Micron All Sky Survey (2MASS). The thesis has three main parts. The first part is the technical development of the Bayesian galaxy image decomposition package Galphat, based on a Markov chain Monte Carlo algorithm. I implement a fast and accurate galaxy model image generation algorithm to reduce computation time and make the Bayesian approach feasible for real science analysis using large ensembles of galaxies. I perform a benchmark test of Galphat and demonstrate significant improvement in parameter estimation with correct statistical confidence. The second part is a performance test of the full Bayesian application to galaxy bulge-disc decomposition analysis, including not only parameter estimation but also model comparison to classify different galaxy populations. The test demonstrates that Galphat has enough statistical power to make reliable model inference using galaxy photometric survey data. Bayesian prior updating is also tested for parameter estimation and Bayes factor model comparison, and it shows that an informative prior significantly improves the model inference in every aspect. The last part is a Bayesian bulge-disc decomposition analysis using 2MASS Ks-band selected samples. I characterise the luminosity distributions of spheroids, bulges and discs separately in the local Universe and study correlations with galaxy morphology, fully utilising the ensemble parameter posteriors of the entire galaxy sample. It shows that, to avoid biased inference, the parameter covariance and model degeneracy have to be carefully characterised by the full probability distribution.
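For orientation, here is a minimal sketch of the surface brightness model underlying such a bulge-disc decomposition: a Sérsic bulge plus an exponential disc. The function names and parameter values are illustrative and are not Galphat's actual interface.

```python
# Sersic bulge + exponential disc surface brightness profile (1-D sketch).
import numpy as np
from scipy.special import gammaincinv

def sersic(r, I_e, r_e, n):
    """Sersic profile; b_n is chosen so that r_e encloses half the light."""
    b_n = gammaincinv(2.0 * n, 0.5)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def bulge_disc(r, I_e, r_e, n, I_0, r_d):
    """Total surface brightness: Sersic bulge plus exponential disc."""
    return sersic(r, I_e, r_e, n) + I_0 * np.exp(-r / r_d)

r = np.linspace(0.1, 20.0, 200)   # radius in arcsec, illustrative
sb = bulge_disc(r, I_e=100.0, r_e=1.5, n=4.0, I_0=50.0, r_d=4.0)
print(sb[:5])
```

A Bayesian decomposition places priors on (I_e, r_e, n, I_0, r_d) and samples their joint posterior given the pixel data, which is what makes the full parameter covariance available downstream.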
83

On Particle Methods for Uncertainty Quantification in Complex Systems

Yang, Chao January 2017 (has links)
No description available.
84

On a Selection of Advanced Markov Chain Monte Carlo Algorithms for Everyday Use: Weighted Particle Tempering, Practical Reversible Jump, and Extensions

Carzolio, Marcos Arantes 08 July 2016 (has links)
We are entering an exciting era, rich in the availability of data via sources such as the Internet, satellites, particle colliders, telecommunication networks, computer simulations, and the like. The confluence of increasing computational resources, volumes of data, and variety of statistical procedures has brought us to a modern enlightenment. Within the next century, these tools will combine to reveal unforeseeable insights into the social and natural sciences. Perhaps the largest headwind we now face is our collectively slow-moving imagination. Like a car on an open road, learning is limited by its own rate. Historically, slow information dissemination and the unavailability of experimental resources limited our learning. To that point, any methodological contribution that helps in the conversion of data into knowledge will accelerate us along this open road. Furthermore, if that contribution is accessible to others, the speedup in knowledge discovery scales exponentially. Markov chain Monte Carlo (MCMC) is a broad class of powerful algorithms, typically used for Bayesian inference. Despite their variety and versatility, these algorithms rarely become mainstream workhorses because they can be difficult to implement. The humble goal of this work is to bring to the table a few more highly versatile and robust, yet easily tuned, algorithms. Specifically, we introduce weighted particle tempering, a parallelizable MCMC procedure that is adaptable to large computational resources. We also explore and develop a highly practical implementation of reversible jump, the most generalized form of Metropolis-Hastings. Finally, we combine these two algorithms into reversible jump weighted particle tempering, and apply it to a model and dataset that was partially collected by the author and his collaborators, halfway around the world. It is our hope that by introducing, developing, and exhibiting these algorithms, we can make a reasonable contribution to the ever-growing body of MCMC research. / Ph. D.
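For context, here is a minimal random-walk Metropolis-Hastings sampler, the basic building block that procedures such as weighted particle tempering run in parallel over tempered copies of the target. The target density and step size are illustrative; the dissertation's algorithms are considerably more elaborate.

```python
# Random-walk Metropolis-Hastings on a 1-D log target.
import numpy as np

def metropolis(log_target, x0, n_iter=10_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example: sample a standard normal target
draws = metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.std())
```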
85

A discrete-time performance model for congestion control mechanism using queue thresholds with QOS constraints

Guan, Lin, Woodward, Mike E., Awan, Irfan U. January 2005 (has links)
This paper presents a new analytical framework for the congestion control of Internet traffic using a queue threshold scheme. The framework includes two discrete-time analytical models for the performance evaluation of a threshold-based congestion control mechanism and compares performance measures through typical numerical results. To satisfy low delay along with high throughput, Model-I incorporates one threshold that makes the arrival rate step down directly from λ1 to λ2 once the number of packets in the system reaches the threshold value L1; the source operates normally otherwise. Model-II incorporates two thresholds that make the arrival rate decrease linearly from λ1 to λ2 with the system contents while the number of packets in the system is between the two thresholds L1 and L2; the source operates normally with arrival rate λ1 below threshold L1, and with arrival rate λ2 above threshold L2. In both models, the mean packet delay W, probability of packet loss PL and throughput S are found as functions of the thresholds and the maximum drop probability. Performance comparisons between the two models are also made through typical numerical results, which clearly demonstrate how different load settings can provide different trade-offs between throughput, loss probability and delay to suit different service requirements.
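Below is a minimal discrete-time simulation of the Model-II scheme, assuming Bernoulli arrivals and services per slot with illustrative rates and thresholds; it estimates W, PL and S empirically, whereas the paper derives them analytically.

```python
# Two-threshold congestion control (Model-II): arrival rate falls linearly
# from lam1 to lam2 as the queue length moves between L1 and L2.
import numpy as np

lam1, lam2, mu = 0.8, 0.3, 0.6     # arrival and service probabilities per slot
L1, L2, capacity = 10, 20, 30
rng = np.random.default_rng(0)

def arrival_rate(q):
    if q < L1:
        return lam1
    if q >= L2:
        return lam2
    return lam1 + (lam2 - lam1) * (q - L1) / (L2 - L1)  # linear reduction

q, arrived, lost, area = 0, 0, 0, 0
slots = 100_000
for _ in range(slots):
    if rng.uniform() < arrival_rate(q):
        arrived += 1
        if q < capacity:
            q += 1
        else:
            lost += 1
    if q > 0 and rng.uniform() < mu:
        q -= 1
    area += q

S = (arrived - lost) / slots        # throughput per slot
PL = lost / max(arrived, 1)         # packet loss probability
W = (area / slots) / max(S, 1e-12)  # mean delay in slots, via Little's law
print(f"S={S:.3f}, PL={PL:.4f}, W={W:.2f} slots")
```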
86

Spectrum-efficient Cooperation and Bargaining-based Resource Allocation for Secondary Users in Cognitive Radio Networks

Abdelraheem, Mohamed Medhat Tawfik 20 November 2015 (has links)
Dynamic spectrum access (DSA) is a promising approach to alleviate spectrum scarcity and improve spectrum utilization. Our work aims to enhance the utilization of the available white spaces in the licensed spectrum by enabling cooperative communication in secondary networks. We investigate the ability of a two-hop cooperative transmission to reduce the effect of primary user interruptions on secondary transmissions. We analyze the performance of cooperative secondary transmission by modeling the interaction between primary user and secondary user transmissions with a discrete-time Markov chain (DTMC). The analysis shows a significant enhancement in secondary transmission efficiency and throughput when cooperative transmission is utilized instead of direct transmission, especially at high levels of primary user activity. We extend our study to model secondary cooperative transmission in realistic scenarios. We evaluate the throughput enhancement in the secondary infrastructure network analytically and by simulation: a simple scenario is modeled analytically by a DTMC that captures the probability of finding intermediate relays according to node density, and by discrete-event simulation, with the two sets of results confirming each other. We introduce a dedicated cooperative and cognitive Medium Access Control (MAC) protocol named CO2MAC to facilitate secondary user transmissions in infrastructure-based secondary networks. The proposed MAC enables cooperative Multi-Input-Multi-Output (MIMO) transmission techniques to further enhance throughput. Using the proposed MAC, we quantify the throughput enhancement of secondary infrastructure networks via simulation of complex scenarios. The results show a gain in cooperative transmission throughput over direct transmission, especially in crowded spectrum, owing to the ability of cooperative transmissions to reduce the negative effect of primary user interruptions by buffering data at intermediate relays. The cooperative throughput advantage over direct transmission also grows with node density, as the probability of finding intermediate relays increases. We then answer two questions. The first is how a secondary user pays the cooperation price to its relay, and under which conditions the cooperation is beneficial for both of them. The second is how to pair the cooperating nodes and allocate channels in an infrastructure-based secondary network. To answer the first question, we model the cooperation between the secondary user and its relay as a resource exchange process, where the secondary user vacates part of its dedicated free spectrum access time to the relay as a price for the energy the relay consumes in forwarding the secondary user's packets. We define a suitable utility function that combines throughput and energy, and then apply axiomatic bargaining solutions, namely the Nash bargaining solution (NBS) and the egalitarian bargaining solution (EBS), to find the new free spectrum access shares for the secondary user and the relay based on the defined utility in the cooperation mode. We show that under certain conditions the cooperation is beneficial for both the secondary user and the relay, with both achieving higher utility and throughput than in the non-cooperative mode.
Finally, based on the bargaining-based shares of the cooperating nodes, node pairing and channel allocation are optimized for different objectives, namely maximizing the total network throughput or minimizing the maximum unsatisfied demand. Our bargaining-based framework shows performance comparable to the case in which the nodes' free spectrum access time shares are jointly optimized with the pairing and allocation process; at the same time, our cooperation framework provides an incentive for the secondary users and the relays to engage in cooperation by giving every node a share of the free spectrum proportional to its utility. We also study the case of multiple secondary access points, which gives more flexibility in node pairing and channel allocation and achieves better performance in terms of the two defined objectives. / Ph. D.
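As a sketch of the bargaining step, the following computes a Nash bargaining split of free spectrum access time under a deliberately simplified utility model; the linear utilities and disagreement values are illustrative assumptions, not the dissertation's throughput/energy utility.

```python
# Nash bargaining split of the relay's share a in [0, 1] of free access time.
from scipy.optimize import minimize_scalar

d_su, d_rl = 0.4, 0.1            # disagreement (non-cooperative) utilities
u_su = lambda a: 1.0 - 0.6 * a   # SU utility shrinks as it cedes access time
u_rl = lambda a: 0.2 + 0.9 * a   # relay utility grows with its share

# The NBS maximizes the product of utility gains over the disagreement point.
neg_nash = lambda a: -max(u_su(a) - d_su, 0.0) * max(u_rl(a) - d_rl, 0.0)
res = minimize_scalar(neg_nash, bounds=(0.0, 1.0), method="bounded")
print(f"relay share a* = {res.x:.3f}")
```

Cooperation is beneficial exactly when both factors of the Nash product are positive at the optimum, i.e., both nodes exceed their non-cooperative utilities.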
87

Implication of Terrain Topology Modelling on Ground Vehicle Reliability

Kawale, Sujay J. 14 March 2011 (has links)
The accuracy of computer-based ground vehicle durability and ride quality simulations depends on an accurate representation of road surface topology as an excitation to vehicle dynamics simulation software, since most of the excitation input to a vehicle traversing terrain comes from the surface topology. It is not computationally efficient to utilise physically measured terrain topology for these simulations, since extremely large data sets would be required to represent terrain of all desired types. Moreover, performing repeated simulations on the same set of measured data would not provide the random character typical of real-world usage. Several methods exist for synthesising terrain data through stochastic or mathematical models, in order to capture such physical properties of measured terrain as roughness, bank angle and grade. In the first part of this work, the autoregressive model and the Markov chain model are applied to generate synthetic two-dimensional terrain profiles. The synthesised profiles are expected to capture the statistical properties of the measured data. A methodology is then proposed to assess how well these terrain models capture the statistical properties of the measured terrain; this is done by applying several statistical property tests to the measured and synthesised terrain profiles. The second part of this work describes the procedure followed to assess how well these models capture the vehicle component fatigue-inducing characteristics of the measured terrain, by predicting suspension component fatigue life from the loading conditions obtained from the measured terrain and the corresponding synthesised terrain. The terrain model assessment methodology presented in this work can be applied to any terrain model, serving to identify which terrain models are suited to which types of terrain. / Master of Science
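A minimal sketch of the Markov chain approach to terrain synthesis: discretize the measured elevation increments into states, estimate a transition matrix, then sample a synthetic profile. The "measured" profile below is simulated noise standing in for real data, and the 16-state discretization is an arbitrary choice.

```python
# Markov chain terrain profile synthesis from discretized elevation increments.
import numpy as np

rng = np.random.default_rng(2)
measured = np.cumsum(rng.normal(0, 0.01, size=5000))  # stand-in for real data

inc = np.diff(measured)
n_states = 16
edges = np.quantile(inc, np.linspace(0, 1, n_states + 1))
states = np.clip(np.searchsorted(edges, inc) - 1, 0, n_states - 1)

# Estimate the transition matrix (tiny smoothing avoids empty rows).
P = np.full((n_states, n_states), 1e-9)
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)
centers = 0.5 * (edges[:-1] + edges[1:])   # representative increment per state

# Sample a synthetic profile of the same length.
s, profile = states[0], [measured[0]]
for _ in range(5000):
    s = rng.choice(n_states, p=P[s])
    profile.append(profile[-1] + centers[s])
```

The statistical-property tests described above would then be applied to `measured` and `profile` to judge whether the model preserved roughness and related characteristics.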
88

Bayesian Methods for Intensity Measure and Ground Motion Selection in Performance-Based Earthquake Engineering

Dhulipala, Lakshmi Narasimha Somayajulu 19 March 2019 (has links)
The objective of quantitative Performance-Based Earthquake Engineering (PBEE) is designing buildings that meet specified performance objectives when subjected to an earthquake. One challenge to relying completely upon a PBEE approach in design practice is the open-ended nature of characterizing the earthquake ground motion by selecting appropriate ground motions and Intensity Measures (IMs) for seismic analysis. This open-endedness changes the quantified building performance depending upon the ground motions and IMs selected, so improper ground motion and IM selection can lead to errors in structural performance prediction and thus to poor designs. Hence, the goal of this dissertation is to propose methods and tools that enable an informed selection of earthquake IMs and ground motions, with the broader goal of contributing toward a robust PBEE analysis. In doing so, the change of perspective and the mechanism for incorporating additional information provided by Bayesian methods will be utilized. Evaluating the ability of IMs to predict the response of a building with precision and accuracy for a future, unknown earthquake is a fundamental problem in PBEE analysis. Whereas current methods for IM quality assessment are subjective and involve multiple criteria (hence making IM selection challenging), a unified method is proposed that enables rating the numerous IMs. This is done by proposing the first quantitative metric for assessing IM accuracy in predicting the building response to a future earthquake, and then by investigating the relationship between precision and accuracy. This unified metric is further expected to provide a pathway toward improving PBEE analysis by allowing the consideration of multiple IMs. Like IM selection, ground motion selection is important for PBEE analysis. Consensus on the "right" input motions for conducting seismic response analyses often varies and depends on the analyst. Hence, a general and flexible tool is proposed to aid ground motion selection. General here means the tool encompasses several structural types by considering their sensitivities to different ground motion characteristics; flexible means the tool can consider additional information about the earthquake process when it is available to the analyst. Additionally, in support of this ground motion selection tool, a simplified method for seismic hazard analysis for a vector of IMs is developed. This dissertation addresses four critical issues in IM and ground motion selection for PBEE by proposing: (1) a simplified method for performing vector hazard analysis given multiple IMs; (2) a Bayesian framework to aid ground motion selection which is flexible and general enough to incorporate the preferences of the analyst; (3) a unified metric to aid IM quality assessment for seismic fragility and demand hazard assessment; (4) Bayesian models for capturing heteroscedasticity (non-constant standard deviation) in seismic response analyses, which may further influence IM selection. / Doctor of Philosophy / Earthquake ground shaking is a complex phenomenon since there is no unique way to assess its strength. Yet the strength of ground motion (shaking) is an integral input for predicting the future earthquake performance of buildings using the Performance-Based Earthquake Engineering (PBEE) framework. The PBEE framework predicts building performance in terms of expected financial losses, possible downtime, and the potential of the building to collapse under a future earthquake.
Much prior research has shown that the predictions made by the PBEE framework are heavily dependent upon how the strength of a future earthquake ground motion is characterized. This dependency leads to uncertainty in the predicted building performance and hence in its seismic design. The goal of this dissertation is therefore to employ Bayesian reasoning, which takes into account alternative explanations or perspectives of a research problem, and to propose robust quantitative methods that aid IM selection and ground motion selection in PBEE. The fact that the local intensity of an earthquake can be characterized in multiple ways using Intensity Measures (IMs; e.g., peak ground acceleration) is problematic for PBEE because it leads to different PBEE results for different choices of the IM. While formal procedures for selecting an optimal IM exist, they can be subjective and involve multiple criteria, making their use difficult and inconclusive. Bayes' rule provides a mechanism called change of perspective, by which a problem that is difficult to solve from one perspective can be tackled from a different perspective. This change of perspective is used to propose a quantitative, unified metric for rating alternative IMs. The immediate application of this metric is aiding the selection of the IM that predicts building earthquake performance with the least bias. Structural analysis for performance assessment in PBEE is conducted by selecting ground motions that match a target response spectrum (a representation of future ground motions). The definition of a target response spectrum lacks general consensus and depends on the analyst's preferences. To encompass these preferences and requirements, a Bayesian target response spectrum that is general and flexible is proposed. Its generality allows analysts to select the ground motions to which their structures are most sensitive, while its flexibility permits the incorporation of additional information (preferences) into the target response spectrum development. This dissertation addresses four critical questions in PBEE: (1) how can we best define ground motion at a site? (2) if ground motion can only be defined by multiple metrics, how can we easily derive the probability of such shaking at a site? (3) how do we use these multiple metrics to select a set of ground motion records that best capture the site's unique seismicity? (4) when those records are used to analyze the response of a structure, how can we be sure that a standard linear regression technique accurately captures the uncertainty in structural response at low and high levels of shaking?
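To make point (4) concrete, here is a minimal sketch of a heteroscedastic seismic demand model, ln(EDP) = a + b·ln(IM) with a log-linear standard deviation, fit by MAP optimization under weak normal priors as a stand-in for full posterior sampling. The data are synthetic and the parameterization is an illustrative assumption, not the dissertation's model.

```python
# Heteroscedastic demand model: noise scale varies with IM level.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
log_im = np.log(rng.uniform(0.05, 1.5, size=200))   # e.g., ln(PGA in g)
log_edp = (1.2 + 0.9 * log_im
           + np.exp(-0.5 + 0.3 * log_im) * rng.standard_normal(200))

def neg_log_posterior(theta):
    a, b, c, d = theta
    mu = a + b * log_im
    sigma = np.exp(c + d * log_im)                  # non-constant std dev
    loglik = (-0.5 * np.sum(((log_edp - mu) / sigma) ** 2)
              - np.sum(np.log(sigma)))
    logprior = -0.5 * np.sum(np.asarray(theta) ** 2 / 10.0)  # weak N(0,10)
    return -(loglik + logprior)

map_est = minimize(neg_log_posterior, x0=np.zeros(4)).x
print("MAP (a, b, c, d):", np.round(map_est, 3))
```

A nonzero d indicates that a constant-dispersion linear regression would misstate response uncertainty at low and high shaking levels, which is the concern raised in question (4).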
89

Efficient Path and Parameter Inference for Markov Jump Processes

Boqian Zhang (6563222) 15 May 2019 (has links)
<div>Markov jump processes are continuous-time stochastic processes widely used in a variety of applied disciplines. Inference typically proceeds via Markov chain Monte Carlo (MCMC), the state-of-the-art being a uniformization-based auxiliary variable Gibbs sampler. This was designed for situations where the process parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating it into a larger Gibbs sampler. This strategy of sampling parameters given path, and path given parameters can result in poor Markov chain mixing.</div><div><br></div><div>In this thesis, we focus on the problem of path and parameter inference for Markov jump processes.</div><div><br></div><div>In the first part of the thesis, a simple and efficient MCMC algorithm is proposed to address the problem of path and parameter inference for Markov jump processes. Our scheme brings Metropolis-Hastings approaches for discrete-time hidden Markov models to the continuous-time setting, resulting in a complete and clean recipe for parameter and path inference in Markov jump processes. In our experiments, we demonstrate superior performance over Gibbs sampling, a more naive Metropolis-Hastings algorithm we propose, as well as another popular approach, particle Markov chain Monte Carlo. We also show our sampler inherits geometric mixing from an ‘ideal’ sampler that is computationally much more expensive.</div><div><br></div><div>In the second part of the thesis, a novel collapsed variational inference algorithm is proposed. Our variational inference algorithm leverages ideas from discrete-time Markov chains, and exploits a connection between Markov jump processes and discrete-time Markov chains through uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.</div><div><br></div>
90

Model development of Time dynamic Markov chain to forecast Solar energy production / Modellutveckling av tidsdynamisk Markovkedja, för solenergiprognoser

Bengtsson, Angelica January 2023 (has links)
This study attempts to improve forecasts of solar energy production (SEP) so that energy trading companies can propose more accurate bids to Nord Pool. The aim is to make solar energy a more lucrative business and thereby encourage more investment in this green form of energy. The model introduced is a hidden Markov model (HMM) that we call a Time-dynamic Markov chain (TDMC). The TDMC is presented in general form but applied to the energy sector SE4 in the south of Sweden. A simple linear regression model is used for comparison with the TDMC model. In terms of mean absolute error (MAE) and root-mean-square error (RMSE), the TDMC model outperforms the simple linear regression, both when the training data is relatively fresh and when the training data has not been updated in over 300 days. A paired t-test also shows a non-significant deviation from the true SEP per day, at the 0.05 significance level, when simulating the first two months of 2023 with the TDMC model. The simple linear regression model, in comparison, shows a significant difference from reality.
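A minimal sketch of the time-dynamic idea: one transition matrix per hour-of-day step over discretized production levels, estimated from history and used to simulate a day ahead. The data below are random stand-ins, not SE4 production data, and the 8-level discretization is an arbitrary choice.

```python
# Time-dynamic Markov chain: hour-dependent transition matrices.
import numpy as np

rng = np.random.default_rng(5)
hours, days, n_states = 24, 365, 8
# Stand-in history: production-level state per (day, hour).
hist = rng.integers(0, n_states, size=(days, hours))

# One transition matrix per hour-of-day transition (hour h -> hour h+1);
# tiny smoothing avoids empty rows.
P = np.full((hours - 1, n_states, n_states), 1e-6)
for d in range(days):
    for h in range(hours - 1):
        P[h, hist[d, h], hist[d, h + 1]] += 1
P /= P.sum(axis=2, keepdims=True)

# Simulate one day ahead from a morning state.
state, sim = hist[-1, 0], [hist[-1, 0]]
for h in range(hours - 1):
    state = rng.choice(n_states, p=P[h, state])
    sim.append(state)
print(sim)
```

Mapping each state back to a representative production level and averaging many simulated days would give the point forecast compared against linear regression via MAE and RMSE above.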
