About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Evaluating Wind Power Generating Capacity Adequacy Using MCMC Time Series Model

Almutairi, Abdulaziz 19 September 2014 (has links)
In recent decades, there has been a dramatic increase in the use of renewable energy resources by power utilities around the world. The shift toward renewable energy is driven mainly by environmental concerns and the fuel cost escalation associated with conventional fossil generation. Among renewable resources, wind energy is a proven source of power generation that contributes positively to global, social, and economic environments. Wind energy is now a mature, abundant, and emission-free generation technology, and a significant share of electrical power demand is supplied by wind. However, the intermittent nature of wind generation introduces various challenges for both the operation and planning of power systems. One problem with increasing the use of wind generation arises from the reliability assessment point of view: there is a recognized need to study the contribution of wind generation to overall system reliability and to ensure the adequacy of generating capacity. Wind power generation differs from conventional (i.e., fossil-based) generation in that wind power is variable and non-controllable, which can affect power system reliability. Modeling wind generation in a reliability assessment therefore calls for stochastic simulation techniques that can properly handle this uncertainty and accurately reflect the variable characteristics of the wind at a particular site. The research presented in this thesis focuses on developing a reliable and appropriate model for the reliability assessment of power system generation that includes wind energy sources. The thesis uses the Markov Chain Monte Carlo (MCMC) technique because it can produce synthetic wind power time series that capture the randomness of the wind while preserving the statistical and temporal characteristics of the measured data. The MCMC-based synthetic wind power time series is then coupled with a probabilistic sequential methodology for conventional generation in order to assess the overall adequacy of generating systems. The study is applied to two test systems, the Roy Billinton Test System (RBTS) and the IEEE Reliability Test System (IEEE-RTS). A wide range of reliability indices is then calculated, including loss of load expectation (LOLE), loss of energy expectation (LOEE), loss of load frequency (LOLF), energy not supplied per interruption (ENSPI), demand not supplied per interruption (DNSPI), and expected duration per interruption (EDPI). To show the effectiveness of the proposed methodology, a further study compares the reliability indices obtained with the MCMC model against those from the ARMA model often used in reliability studies. The methodologies and results presented in this thesis aim to provide useful information to planners and developers who assess the reliability of power generation systems that contain wind generation.
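A minimal sketch of the core idea, under simplifying assumptions: a first-order Markov chain is fitted to discretized wind power states and then used to generate a synthetic series. The bin count, the stand-in input data, and the function names are illustrative, not taken from the thesis.

```python
import numpy as np

def fit_transition_matrix(series, n_states=10):
    """Discretize a measured wind power series and estimate the
    state-transition probability matrix by counting transitions."""
    bins = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, bins) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    # Normalize rows; rows with no observations become uniform.
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n_states)
    return P, bins

def synthesize(P, bins, length, rng):
    """Generate a synthetic series by walking the chain and sampling
    uniformly within each visited state's power bin."""
    n = P.shape[0]
    state = rng.integers(n)
    out = np.empty(length)
    for i in range(length):
        state = rng.choice(n, p=P[state])
        out[i] = rng.uniform(bins[state], bins[state + 1])
    return out

measured = np.abs(np.sin(np.linspace(0, 20, 1000))) * 2.0  # stand-in data, MW
P, bins = fit_transition_matrix(measured)
synthetic = synthesize(P, bins, 8760, np.random.default_rng(0))  # one year, hourly
```

The synthetic series can then feed a sequential reliability simulation alongside conventional-unit state models, which is the coupling the abstract describes.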
72

Consensus analysis of networked multi-agent systems with second-order dynamics and Euler-Lagrange dynamics

Mu, Bingxian 30 May 2013 (has links)
Consensus is a central issue in designing multi-agent systems (MASs), and the key to solving consensus problems is designing control protocols under given communication topologies. This thesis investigates consensus protocols under two scenarios: (1) second-order system dynamics with Markov time delays; and (2) Euler-Lagrange dynamics with uniform and nonuniform sampling strategies and an event-based control strategy. Chapter 2 focuses on the consensus problem for multi-agent systems with random delays governed by a Markov chain. For second-order dynamics in the sampled-data setting, we first convert the consensus problem to the stability analysis of the equivalent error system dynamics. By designing a suitable Lyapunov function and deriving a set of linear matrix inequalities (LMIs), we analyze the mean square stability of the error system dynamics under a fixed communication topology. Since the transition probabilities of a Markov chain are sometimes only partially known, we propose a method for estimating the delay at the next sampling instant, and we explicitly give a lower bound on the probability of correct delay estimation that ensures the stability of the error system dynamics. Finally, by applying an augmentation technique, we convert the error system dynamics to a delay-free stochastic system and establish a sufficient condition that guarantees consensus of networked multi-agent systems with switching topologies. Simulation studies for a fleet of unmanned vehicles verify the theoretical results. In Chapter 3, we propose consensus control protocols involving both position and velocity information for MASs with linearized Euler-Lagrange dynamics, under uniform and nonuniform sampling schemes, respectively. We then extend the results to a centralized event-triggered strategy and analyze the corresponding consensus properties. Simulation examples and comparisons verify the effectiveness of the proposed methods.
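For intuition, here is a hedged sketch of sampled-data consensus for double-integrator (second-order) agents over a fixed undirected topology. The gains, the delay-free setting, and the four-agent ring are illustrative simplifications, not the thesis's Markov-delay protocols.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],   # adjacency matrix of a 4-agent ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n, h, gamma = A.shape[0], 0.05, 1.5   # agents, sampling period, velocity gain

rng = np.random.default_rng(1)
x, v = rng.uniform(-5, 5, n), rng.uniform(-1, 1, n)

for _ in range(2000):                 # sampled-data update loop
    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:           # neighbor-based relative feedback
                u[i] -= (x[i] - x[j]) + gamma * (v[i] - v[j])
    x, v = x + h * v, v + h * u       # zero-order-hold Euler step

print("positions:", np.round(x, 3))   # all agents at a common position...
print("velocities:", np.round(v, 3))  # ...moving with a common velocity
```

With only relative feedback, the agents converge to a shared trajectory that drifts at the average of the initial velocities; the thesis's analysis addresses when such convergence survives Markov-distributed sampling delays.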
73

Design and Analysis of Sequential Clinical Trials using a Markov Chain Transition Rate Model with Conditional Power

Pond, Gregory Russell 01 August 2008 (has links)
Background: There is a plethora of potential statistical designs that can be used to evaluate the efficacy of a novel cancer treatment in the phase II clinical trial setting. Unfortunately, there is no consensus as to which design one should prefer, nor even which definition of efficacy should be used, and the primary endpoint conclusion can vary depending on which design is chosen. It would be useful to have an all-encompassing methodology that could evaluate all the different designs simultaneously and give investigators an understanding of the trial results under the varying scenarios. Methods: Finite Markov chain imbedding is a method that can be used in the setting of phase II oncology clinical trials but has never previously been evaluated in this setting. Simple variations to the transition matrix or end-state probability definitions allow for the evaluation of multiple designs and endpoints for a single trial. A computer program is written in R that computes p-values and conditional power, two common statistical measures used for evaluating trial results. A simulation study is performed on data arising from an actual phase II clinical trial performed recently, in which the study conclusion regarding the efficacy of the potential treatment was debatable. Results: Finite Markov chain imbedding is shown to be useful for evaluating phase II oncology clinical trial results. The R code written for the simulation study is demonstrated to be fast and useful for investigating different trial designs. Further details regarding the clinical trial results are presented, including the potential prolongation of stable disease by the treatment, a potentially useful marker of efficacy for this cytostatic agent. Conclusions: This novel methodology may prove to be a useful investigative technique for the evaluation of phase II oncology clinical trial data. Future studies with disputable conclusions might become less controversial with the aid of finite Markov chain imbedding and the multiple evaluations it makes viable. A better understanding of the activity of a given treatment might expedite the drug development process or help distinguish active from inactive treatments.
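As an illustration of the imbedding idea (not the thesis's R program), the sketch below tracks the number of responses in a single-arm trial as the state of a Markov chain, so that powers of the transition matrix give the end-state distribution and a one-sided p-value. The response rate, sample size, and cutoff are invented for the example.

```python
import numpy as np

def response_count_distribution(n_patients, p_response, max_state):
    """End-state probabilities of the imbedded chain after n patients.
    States 0..max_state-1 count responses; the last state absorbs overflow."""
    m = max_state + 1
    P = np.zeros((m, m))
    for s in range(m):
        if s < max_state:
            P[s, s] += 1 - p_response      # no response: stay
            P[s, s + 1] += p_response      # response: advance
        else:
            P[s, s] = 1.0                  # absorbing cap: ">= max_state"
    start = np.zeros(m)
    start[0] = 1.0
    return start @ np.linalg.matrix_power(P, n_patients)

# One-sided p-value for observing >= 9 responses in 40 patients when the
# null response rate is 10%: P(X >= 9 | p0 = 0.10).
dist = response_count_distribution(40, 0.10, max_state=9)
print(f"p-value = {dist[-1]:.4f}")
```

Changing the state definition (e.g., tracking runs of stable disease instead of response counts) or the end-state probability of interest is what lets the same machinery evaluate multiple designs and endpoints for one trial.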
74

Fast Algorithms for Large-Scale Phylogenetic Reconstruction

Truszkowski, Jakub January 2013 (has links)
One of the most fundamental computational problems in biology is that of inferring evolutionary histories of groups of species from sequence data. Such evolutionary histories, known as phylogenies, are usually represented as binary trees where leaves represent extant species and internal nodes represent their shared ancestors. As the amount of sequence data available to biologists increases, very fast phylogenetic reconstruction algorithms are becoming necessary. Currently, large sequence alignments can contain up to hundreds of thousands of sequences, making traditional methods, such as Neighbour Joining, computationally prohibitive. To address this problem, we have developed three novel fast phylogenetic algorithms. The first algorithm, QTree, is a quartet-based heuristic that runs in O(n log n) time. It is based on a theoretical algorithm that reconstructs the correct tree, with high probability, assuming every quartet is inferred correctly with constant probability. The core of our algorithm is a balanced search tree structure that enables us to locate an edge in the tree in O(log n) time. Our algorithm is several times faster than all current methods, while its accuracy approaches that of Neighbour Joining. The second algorithm, LSHTree, is the first sub-quadratic time algorithm with theoretical performance guarantees under a Markov model of sequence evolution. It runs in O(n^{1+γ(g)} log^2 n) time, where γ is an increasing function of an upper bound g on the mutation rate along any branch in the phylogeny, and γ(g) < 1 for all g. For phylogenies with very short branches, the running time of our algorithm is close to linear. In experiments, our prototype implementation was more accurate than current fast algorithms, while being comparably fast. In the final part of this thesis, we apply the algorithmic framework behind LSHTree to the problem of placing large numbers of short sequence reads onto a fixed phylogenetic tree. Our initial results in this area are promising, but there are still many challenges to be resolved.
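To illustrate the quartet step such heuristics build on, here is a minimal four-point-condition sketch: given pairwise distances for four taxa, the favored quartet topology is the split with the smallest sum of within-pair distances. The distance matrix is illustrative, and the code is not taken from QTree.

```python
import numpy as np

def infer_quartet(D, a, b, c, d):
    """Return the pairing ((x, y), (z, w)) minimizing D[x, y] + D[z, w],
    i.e., the quartet topology xy|zw favored by the four-point condition."""
    pairings = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
    return min(pairings, key=lambda p: D[p[0][0], p[0][1]] + D[p[1][0], p[1][1]])

# Additive distances on the true tree ((A,B),(C,D)) with unit branch lengths:
# the within-pair sum for AB|CD is 4, versus 8 for the two wrong splits.
D = np.array([[0, 2, 4, 4],
              [2, 0, 4, 4],
              [4, 4, 0, 2],
              [4, 4, 2, 0]], dtype=float)
print(infer_quartet(D, 0, 1, 2, 3))   # -> ((0, 1), (2, 3)), i.e., AB|CD
```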
75

Bayesian Inference for Stochastic Volatility Models

Men, Zhongxian January 2012 (has links)
Stochastic volatility (SV) models provide a natural framework for representing time series of financial asset returns. As a result, they have become increasingly popular in the finance literature, and they have also been applied in other fields such as signal processing, telecommunications, engineering, and biology. In working with SV models, an important issue arises as to how to estimate their parameters efficiently and to assess how well they fit real data. In the literature, commonly used estimation methods for SV models include the generalized method of moments, simulated maximum likelihood, quasi-maximum likelihood, and Markov Chain Monte Carlo (MCMC) methods. Among these approaches, MCMC methods are the most flexible in dealing with complicated model structures. However, owing to the difficulty of selecting the proposal distribution for Metropolis-Hastings methods, they are in general not easy to implement, and in some cases convergence problems may be encountered at the implementation stage. In light of these concerns, this thesis proposes new estimation methods for univariate and multivariate SV models. In simulating the latent states of heavy-tailed SV models, we recommend the slice sampler as the main tool for sampling the proposal distribution when the Metropolis-Hastings method is applied. For SV models without heavy tails, a simple Metropolis-Hastings method is developed for simulating the latent states. Since the slice sampler can adapt to the analytical structure of the underlying density, it is more efficient: a sample point can be obtained from the target distribution within a few iterations of the sampler, whereas in the original Metropolis-Hastings method many sampled values often need to be discarded. In the analysis of multivariate time series, multivariate SV models with more general specifications have been proposed to capture the correlations between the innovations of the asset returns and those of the latent volatility processes. Due to restrictions on the variance-covariance matrix of the innovation vectors, the estimation of the multivariate SV (MSV) model is challenging. To tackle this issue, for a very general setting of an MSV model we propose a straightforward MCMC method in which a Metropolis-Hastings step samples the constrained variance-covariance matrix, with an inverse Wishart distribution as the proposal. Again, the log-volatilities of the asset returns can then be simulated via a single-move slice sampler. Recently, factor SV models have been proposed to extract hidden market changes. Geweke and Zhou (1996) propose a factor SV model based on factor analysis to measure pricing errors in the context of arbitrage pricing theory, letting the factors follow the univariate standard normal distribution. Modifications of this model have been proposed, among others, by Pitt and Shephard (1999a) and Jacquier et al. (1999). The main feature of these factor SV models is that the factors follow a univariate SV process, with a loading matrix that is lower triangular with unit entries on the main diagonal. Although factor SV models have been successful in practice, it has been recognized that the ordering of the components may affect the sample likelihood and the selection of the factors. Therefore, in applications, the component order has to be considered carefully; for instance, the factor SV model should be fitted to several permuted data sets to check whether the ordering affects the estimation results. In this thesis, a new factor SV model is proposed. Instead of setting the loading matrix to be lower triangular, we set it to be column-orthogonal with unit-length columns. Our method removes the permutation problem, since the model does not need to be refitted when the order is changed. Because a strong assumption is imposed on the loading matrix, the estimation appears even harder than for the previous factor models; for example, we have to sample the columns of the loading matrix while keeping them orthonormal. To tackle this issue, we use the Metropolis-Hastings method to sample the loading matrix one column at a time, while the orthonormality between the columns is maintained using the technique proposed by Hoff (2007): a von Mises-Fisher distribution is sampled and the generated vector is accepted through the Metropolis-Hastings algorithm. Simulation studies and applications to real data are conducted to examine our inference methods and to test the fit of our model. Empirical evidence illustrates that our slice-sampler-within-MCMC methods work well in terms of parameter estimation and volatility forecasting. Examples using financial asset return data demonstrate that the proposed factor SV model is able to characterize the hidden market factors that mainly govern the financial time series. Kolmogorov-Smirnov tests conducted on the estimated models indicate that the models do a reasonable job of describing real data.
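As a concrete illustration of a single-move slice sampler (a sketch following Neal's stepping-out procedure, not the thesis code), the following update draws one latent log-volatility h_t from its full conditional in a basic SV model y_t ~ N(0, exp(h_t)), h_t | neighbors ~ N(mu_t, s2). All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_density(h, y, mu, s2):
    """Log of the (unnormalized) full conditional of h_t."""
    return -0.5 * (h - mu) ** 2 / s2 - 0.5 * h - 0.5 * y ** 2 * np.exp(-h)

def slice_sample(h, logf, w=1.0, max_steps=50):
    """One stepping-out slice sampling update for a scalar variable."""
    logy = logf(h) + np.log(rng.uniform())   # slice level under the density
    left = h - w * rng.uniform()             # random initial interval around h
    right = left + w
    for _ in range(max_steps):               # step out until outside the slice
        if logf(left) <= logy:
            break
        left -= w
    for _ in range(max_steps):
        if logf(right) <= logy:
            break
        right += w
    while True:                              # shrink until a point is accepted
        prop = rng.uniform(left, right)
        if logf(prop) >= logy:
            return prop
        if prop < h:
            left = prop
        else:
            right = prop

y_t, mu_t, s2 = 0.8, -0.5, 0.05              # illustrative values
logf = lambda x: log_density(x, y_t, mu_t, s2)
h, draws = mu_t, []
for _ in range(1000):
    h = slice_sample(h, logf)
    draws.append(h)
print("posterior mean of h_t:", round(float(np.mean(draws)), 3))
```

Every proposal here is eventually accepted (the interval shrinks until it is), which is the efficiency advantage over a plain Metropolis-Hastings step that the abstract points to.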
76

Delay analysis of molecular communication using filaments and relay-enabled nodes

Darchinimaragheh, Kamaloddin 17 December 2015 (has links)
In this thesis, we suggest using nano-relays in a free-space molecular communication network to improve the delay performance of the system. An approximation method for jump diffusion processes, based on Markov chains, is used to model molecular propagation in such scenarios. The model is validated by comparing analytic results with simulation results. The results illustrate the advantage of using nano-relays over pure diffusion in terms of delay. The proposed model is then used to investigate the effect of different parameters, such as filament length and the number of filaments attached to each nano-relay, on the delay performance of the communication technique. The first set of results uses the transient solution of the model; however, the stationary solution can also generate useful results. In a second set of results, the model is extended to an unbounded scenario. Considering the propagation as a one-sided skip-free process and using matrix analytic methods, we find the final distribution of the position of information molecules, and we show that it is possible to keep molecules in a desired region. The effect of different parameters on this final position distribution is also investigated. This analysis can be useful in drug delivery applications.
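A minimal sketch of the Markov-chain approximation idea for delay analysis: propagation toward an absorbing receiver on a one-dimensional grid, with the mean first-passage time (the delay) obtained from a linear system. The grid size and step probabilities are illustrative, not the thesis's jump-diffusion parameters.

```python
import numpy as np

n = 50                       # grid positions 0..n-1; state n-1 is the receiver
p_fwd, p_back = 0.55, 0.45   # biased steps, e.g., filament-assisted transport

# Transition matrix among transient states 0..n-2 (the receiver is absorbing,
# so forward mass from state n-2 simply leaves the transient block).
Q = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    if i + 1 < n - 1:
        Q[i, i + 1] = p_fwd
    if i - 1 >= 0:
        Q[i, i - 1] = p_back
    else:
        Q[i, i] = p_back     # reflecting boundary at the transmitter

# Expected steps to absorption from each state: t = (I - Q)^{-1} 1.
t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
print(f"expected delay from the transmitter: {t[0]:.1f} steps")
```

Raising the forward bias (stronger filament assistance) or shortening the grid directly shrinks t[0], which mirrors the parameter studies described above.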
77

Economic design of X̄ control charts for monitoring autocorrelated processes

Franco, Bruno Chaves [UNESP] 15 June 2011 (has links) (PDF)
This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the chart parameters, namely the sample size, the sampling interval, and the control limit coefficient, with a genetic algorithm searching for the minimum monitoring cost. A Markov chain is used to determine the average number of samples until a signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing the chart's efficiency.
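For intuition, here is a hedged sketch of the Markov-chain calculation of the average number of samples to signal (ANSS), simplified to an individuals chart on a unit-variance AR(1) process rather than the thesis's X̄ chart with additional error; phi, the limit L, and the discretization are illustrative.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def anss_ar1(phi, L=3.0, k=201):
    """Mean number of samples until the charted statistic exits [-L, L],
    starting from 0, via a discretized Markov chain (mean first passage)."""
    sigma_a = sqrt(1.0 - phi ** 2)            # innovation std for unit variance
    edges = np.linspace(-L, L, k + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    Q = np.empty((k, k))
    for i, m in enumerate(mids):              # transition: x' = phi*x + a
        z = (edges - phi * m) / sigma_a
        cdf = np.array([norm_cdf(v) for v in z])
        Q[i] = cdf[1:] - cdf[:-1]             # mass landing in each interval
    t = np.linalg.solve(np.eye(k) - Q, np.ones(k))
    return t[k // 2]                          # start at the in-control center

for phi in (0.0, 0.4, 0.8):
    print(f"phi = {phi:.1f}  ANSS = {anss_ar1(phi):.1f}")
```

For phi = 0 this recovers the textbook in-control ARL of about 370 for 3-sigma limits; as phi grows, the ANSS changes, which is the adverse autocorrelation effect the sensitivity analysis quantifies inside the economic design loop.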
78

Automatic detection of atrial fibrillation through Markov models

Ana Paula Brambila 27 March 2008 (has links)
Atrial fibrillation (AF) is one of the most common cardiac arrhythmias and is mainly characterized by the presence of random RR intervals. In this sense, atrial fibrillation can be treated as a stochastic process, and it has often been modeled with Markov chains. Following previous studies on this subject, this work models time sequences of heartbeats as a three-state Markov process for automatic AF detection. The model was trained and developed using signals from the MIT-BIH database. Another consolidated method for AF detection, called the "RR Ratio", was also implemented in order to compare against the Markov model's results. The performance of both methods was measured through sensitivity (Se) and positive predictive value (+P) for AF detection. The two methods had their coefficients and thresholds optimized to maximize the values of Se and +P simultaneously. After optimization, both methods were tested on a new database, independent of the development database. The results obtained on the test database were Se = 84.940% and +P = 81.579%, consolidating Markov models for detecting random heartbeats.
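A hedged sketch of the general approach (not the trained MIT-BIH model): RR intervals are binned into three states, transition matrices are fitted for AF and non-AF rhythms, and a segment is labeled by log-likelihood ratio. The thresholds and the toy training data are invented.

```python
import numpy as np

SHORT, REGULAR, LONG = 0, 1, 2

def to_states(rr):
    """Map RR intervals (seconds) to states relative to the segment median."""
    med = np.median(rr)
    return np.where(rr < 0.85 * med, SHORT,
                    np.where(rr > 1.15 * med, LONG, REGULAR))

def fit(states, n=3, alpha=1.0):
    """Estimate a transition matrix with Laplace smoothing."""
    C = np.full((n, n), alpha)
    for s, t in zip(states[:-1], states[1:]):
        C[s, t] += 1
    return C / C.sum(axis=1, keepdims=True)

def loglik(states, P):
    return sum(np.log(P[s, t]) for s, t in zip(states[:-1], states[1:]))

rng = np.random.default_rng(7)
af_train = rng.uniform(0.4, 1.2, 500)               # irregular RR intervals
nsr_train = 0.8 + 0.02 * rng.standard_normal(500)   # regular sinus rhythm
P_af, P_nsr = fit(to_states(af_train)), fit(to_states(nsr_train))

test = rng.uniform(0.4, 1.2, 100)                   # unseen irregular segment
llr = loglik(to_states(test), P_af) - loglik(to_states(test), P_nsr)
print("AF detected" if llr > 0 else "non-AF")
```

The optimization step the abstract mentions would tune the binning thresholds (here 0.85 and 1.15) and the decision threshold on the log-likelihood ratio to trade off Se against +P.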
79

A Markov Chain Based Method for Time Series Data Modeling and Prediction

Wang, Yinglu 09 November 2020 (has links)
No description available.
80

An Economic Evaluation of Substitution in Multi-period Service and Consumable Parts Supply Chains for Low Volume, High Value Components with Dissimilar Reliability

Hertzler, Christopher 01 May 2010 (has links)
Service parts management is an integral component of customer satisfaction. The service parts supply chain faces a number of unique challenges that differentiate it from retail and manufacturing supply chains, including unpredictable and lumpy demand, limited storage capacity, high demand service rate requirements, and a high risk of obsolescence. This research focuses on the use of substitution as a policy tool to aid in service parts supply chain management, particularly with respect to low-inventory, high-dollar-value components. In one part of this dissertation, a Markov chain is used to model unidirectional substitution with dissimilar part reliability. In addition, this work investigates probabilistic substitution policies that allow substitution to be employed on a partial basis. The research also utilizes a Poisson process to explore steady-state optimization with probabilistic substitution, for a model in which a non-primary part is used solely as a substitute for primary parts. The models demonstrate that both substitution protocols can significantly improve customer performance benchmarks. Unidirectional substitution policies improve fill rate and backorder levels for the machine upon which substitution is performed; the price of this improvement is the cost of additional ordering and inventory, along with decreased fill rate and backorder performance, for the machine whose parts are used for substitution. Substitution using a part carried solely for that purpose increases performance levels without higher inventory levels of either primary part; however, it requires stocking an additional part and incurring the associated costs.
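As a toy illustration of the unidirectional-substitution idea (assuming one-for-one base stock, exponential lead times, and illustrative rates, none of which are the dissertation's model), the sketch below builds a small continuous-time Markov chain in which part B may substitute for part A when A is out of stock, solves for the stationary distribution, and reads off part A's fill rate.

```python
import numpy as np

S_A, S_B = 3, 3                     # base-stock levels
lam_A, lam_B, mu = 1.0, 0.6, 0.5    # demand rates; replenishment rate per order

idx = {(a, b): k for k, (a, b) in
       enumerate((a, b) for a in range(S_A + 1) for b in range(S_B + 1))}
n = len(idx)
Q = np.zeros((n, n))                # CTMC generator over on-hand levels (a, b)

for (a, b), k in idx.items():
    if a > 0:                       # demand for A served from A's stock
        Q[k, idx[(a - 1, b)]] += lam_A
    elif b > 0:                     # A stocked out: substitute with B
        Q[k, idx[(a, b - 1)]] += lam_A
    if b > 0:                       # demand for B
        Q[k, idx[(a, b - 1)]] += lam_B
    if a < S_A:                     # outstanding A orders arriving
        Q[k, idx[(a + 1, b)]] += (S_A - a) * mu
    if b < S_B:                     # outstanding B orders arriving
        Q[k, idx[(a, b + 1)]] += (S_B - b) * mu
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1 (least squares).
A_mat = np.vstack([Q.T, np.ones(n)])
pi = np.linalg.lstsq(A_mat, np.append(np.zeros(n), 1.0), rcond=None)[0]

# By PASTA, an arriving A demand is filled whenever a > 0 or b > 0.
fill_A = sum(pi[k] for (a, b), k in idx.items() if a > 0 or b > 0)
print(f"fill rate for part A (with substitution): {fill_A:.3f}")
```

Removing the substitution branch (the `elif b > 0` transition) and re-solving shows the fill-rate gain substitution buys, and watching B's stockout probability shows the cost borne by the substituted-from part, matching the trade-off described above.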
