51 |
A Dynamic Channel Allocation Mechanism with Priorities in Wireless Networks. Lin, Hsin-Yuan, 27 July 2000 (has links)
Pico-cellular architectures fully reuse frequency to increase network capacity. However, the small cell range increases the occurrence of handoffs. Previous channel allocation schemes can reduce the blocking probability of handoff calls, but they may increase the blocking probability of new calls. As a result, channel utilization decreases because these schemes cannot adapt to network changes.
In this thesis, we present a dynamic channel allocation mechanism with priority support. All channels and calls are divided into high and low priorities. If no high-priority channel is available for a high-priority call, the call may downgrade its priority, sacrificing some QoS to use low-priority channels. We define two new arrays of network status information: the next-cell state and the transition probability. The next-cell state stores the M cell IDs to which handoff calls may move. The transition probability stores the probabilities of active calls moving to each neighboring cell. From these two arrays, we can accurately predict the probabilities of mobile hosts moving to neighboring cells, and can therefore dynamically adjust the bandwidth reservation requests sent to neighboring cells according to the latest transition probabilities and the number of active calls in the current cell.
We analyze the proposed mechanism through a mathematical model. In the model, we build a four-dimensional Markov chain and use MATLAB [41] to evaluate blocking probability, channel throughput, and utilization. We found that the blocking probability of handoff calls can be decreased and channel utilization increased through the proposed channel allocation mechanism with high- and low-priority support.
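The role of the Markov chain in evaluating blocking probability can be illustrated with the classical single-class special case, an M/M/c/c loss system whose stationary distribution gives the Erlang B formula. The thesis's four-dimensional chain with two priority classes is far richer; the numbers below are invented for illustration.

```python
from math import factorial

def erlang_b(channels, offered_load):
    """Blocking probability for an M/M/c/c loss system (Erlang B).

    This is the stationary distribution of a one-dimensional birth-death
    Markov chain whose state k is the number of busy channels; blocking
    is the probability of being in the all-busy state.
    """
    a = offered_load  # lambda / mu, in Erlangs
    norm = sum(a**k / factorial(k) for k in range(channels + 1))
    return (a**channels / factorial(channels)) / norm

# Example: 10 channels carrying an offered load of 6 Erlangs.
print(round(erlang_b(10, 6.0), 4))
```

Raising the offered load (or shrinking the channel pool) raises the blocking probability, which is the trade-off the priority mechanism above is designed to manage.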
|
52 |
Risk Management of Stochastic Investment -- with the Instance of Public Welfare Lottery in Taiwan. Chen, Wen-Tai, 30 July 2003 (has links)
Abstract
With economic growth and the advent of the post-industrial age in the 21st century, a variety of investment patterns have progressively become available. Moreover, there is a tendency for people's traditionally conservative approach to investment to transform into an aggressive one with a preference for risk. Simple finance-management instruments, such as certificates of deposit, bonds, or even stock markets, can no longer satisfy general investors' needs. Therefore, more and more financial products have become available, especially derivative securities such as options, futures, and stock options. Furthermore, activities such as horse racing, dog racing, and lotteries are no longer taboos in Taiwan. In most countries, the government is usually the main host of lotteries, regarding them as an additional, effortless source of tax revenue.
During earlier periods in Taiwan, the only lottery available was the national "Ai-Guo" lottery issued by the government. Unfortunately, unlawful gambling rings took advantage of the popularity of the "Ai-Guo" lottery by using it as the basis of the so-called "Da-Jia-Le" lottery, an illegal yet popular underground gambling activity. The "Da-Jia-Le" lottery not only corrupted social values and caused a great deal of crime, but also had an acute and serious impact on regular economic operations. Consequently, the government resolutely called a halt to the "Ai-Guo" lottery. Nevertheless, the prevailing gambling mania did not decline; on the contrary, it transformed into a covert, under-the-counter gambling operation attached to the Hong Kong lottery. The craze swept almost the entire country, resulting in more and more social problems and troubling the authorities for a long time. To settle this predicament, the government decided to draw on the experience of other countries and launch local lotteries in Taiwan, instead of merely prohibiting people from attending illicit gambling activities. On the one hand, the government would be able to set the lottery business back on the right track and eliminate the root of the underground economic operations that brought about many social problems. On the other hand, the local lottery would provide extra tax revenue for the administration as well as more employment opportunities for handicapped people. Accordingly, the Taiwan Lottery was launched at the beginning of 2002. Mainly due to the attraction of the surprisingly huge jackpot offered by the lottery's highest prize, the public response has been overwhelming. This social environment, in which stochastic investment actions have become authorized and gradually prevail, directly motivates this research into the feasibility of such investment.
This research takes the Taiwan Public Welfare Lottery as an investment case study, exploring the game rules of the lottery in an attempt to enhance the probability of winning prizes, and further testing and evaluating the effectiveness of various simple betting tools adopted by the general public, in order to identify the most effective betting strategies for individual investors' reference. The process of this study is as follows:
1. Testing the randomness of the winning numbers in the Taiwan Public Welfare Lottery.
2. Estimating the possible distribution of the population via statistical methods.
3. Using cluster analysis to induce sub-clusters of selected numbers according to the patterns of historical winning numbers.
4. Making use of Markov chains to select the sub-cluster that lies within an investor's upper risk limit.
Consequently, this study reveals the following:
1. The winning numbers are random within the study samples.
2. The order statistics of the winning numbers follow a β (beta) distribution.
3. The cluster effect is slightly enhanced when the sub-cluster is chosen through cluster analysis rather than at random.
4. The application of Markov chains markedly improves the selection of the sub-cluster.
5. The so-called "Smart Package Betting" adopted by the general public cannot improve the winning probability; however, depending on an individual investor's needs, it can be a useful tool for risk management.
Through a seemingly aimless random event, the research explores the possible development of the modeling process, so as to comprehend and experience a manager's mental course of decision making in a dynamic, capricious, or even chaotic managerial environment.
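The Markov chain selection step (item 4 above) can be sketched as a first-order transition matrix estimated from the sequence of sub-clusters into which successive historical draws fell. The cluster labels below are invented purely for illustration.

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Estimate a first-order Markov transition matrix from a sequence of
    cluster labels (e.g., the sub-cluster each historical draw fell into).
    A hypothetical sketch of the selection step described above."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # avoid division by zero for unseen states
    return counts / rows

history = [0, 1, 1, 2, 0, 1, 2, 2, 1, 0]  # made-up sub-cluster labels per draw
P = transition_matrix(history, 3)
print(P[1])  # estimated next-cluster probabilities given the last draw fell in cluster 1
```

Given the current cluster, the corresponding row of the matrix ranks the candidate sub-clusters for the next draw, and a risk limit can then prune that ranking.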
|
53 |
The Distribution of the Length of the Longest Increasing Subsequence in Random Permutations of Arbitrary Multi-sets. Al-Meanazel, Ayat, 07 October 2015 (has links)
The distribution theory of runs and patterns has a long and rich history. In this dissertation we study the distribution of some run-related statistics in sequences and in random permutations of arbitrary multi-sets. Using the finite Markov chain imbedding (FMCI) technique proposed by Fu and Koutras (1994), we propose an alternative method to calculate the exact distribution of the total number of adjacent increasing and adjacent consecutive increasing subsequences in sequences.
Fu and Hsieh (2015) obtained the exact distribution of the length of the longest increasing subsequence in random permutations. To the best of our knowledge, little or no work has been done on the corresponding distribution for random permutations of arbitrary multi-sets. Here we obtain the exact distribution of the length of the longest increasing subsequence in random permutations of arbitrary multi-sets. We also obtain the exact distribution of the length of the longest increasing subsequence for the set of all permutations of length N generated from {1,2,...,n}. / February 2016
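For very small multisets, the exact distribution can be checked by brute-force enumeration over all distinct permutations, a useful sanity check on the kind of FMCI computation described above (the FMCI method itself scales far beyond this; the multiset below is invented).

```python
from itertools import permutations
from collections import Counter
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence (patience sorting)."""
    piles = []
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

def lis_distribution(multiset):
    """Exact distribution of the LIS length over all distinct permutations
    of a small multiset, by enumeration."""
    dist = Counter()
    for perm in set(permutations(multiset)):
        dist[lis_length(perm)] += 1
    total = sum(dist.values())
    return {k: v / total for k, v in sorted(dist.items())}

print(lis_distribution((1, 1, 2, 3)))  # 12 distinct permutations of this multiset
```

Whether "increasing" is read strictly or weakly changes the statistic on multisets; the sketch uses the strict convention.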
|
54 |
Fast Algorithms for Large-Scale Phylogenetic Reconstruction. Truszkowski, Jakub, January 2013 (has links)
One of the most fundamental computational problems in biology is that of inferring evolutionary histories of groups of species from sequence data. Such evolutionary histories, known as phylogenies, are usually represented as binary trees where leaves represent extant species and internal nodes represent their shared ancestors. As the amount of sequence data available to biologists increases, very fast phylogenetic reconstruction algorithms are becoming necessary. Currently, large sequence alignments can contain up to hundreds of thousands of sequences, making traditional methods, such as Neighbour Joining, computationally prohibitive. To address this problem, we have developed three novel fast phylogenetic algorithms.
The first algorithm, QTree, is a quartet-based heuristic that runs in O(n log n) time. It is based on a theoretical algorithm that reconstructs the correct tree, with high probability, assuming every quartet is inferred correctly with constant probability. The core of our algorithm is a balanced search tree structure that enables us to locate an edge in the tree in O(log n) time. Our algorithm is several times faster than all the current methods, while its accuracy approaches that of Neighbour Joining.
The second algorithm, LSHTree, is the first sub-quadratic time algorithm with theoretical performance guarantees under a Markov model of sequence evolution. Our new algorithm runs in O(n^{1+γ(g)} log^2 n) time, where γ is an increasing function of an upper bound on the mutation rate along any branch in the phylogeny, and γ(g) < 1 for all g. For phylogenies with very short branches, the running time of our algorithm is close to linear. In experiments, our prototype implementation was more accurate than the current fast algorithms, while being comparably fast.
In the final part of this thesis, we apply the algorithmic framework behind LSHTree to the problem of placing large numbers of short sequence reads onto a fixed phylogenetic tree. Our initial results in this area are promising, but there are still many challenges to be resolved.
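A quartet query of the kind QTree relies on can be sketched with the four-point condition on an additive distance matrix: the pairing with the smallest sum of cross-distances is placed together. This is a hypothetical illustration; the thesis infers quartets from sequence data, where each query is only correct with some probability, not from exact distances.

```python
def quartet_topology(d, a, b, c, e):
    """Infer the quartet topology for taxa (a, b, c, e) from a distance
    matrix d using the four-point condition: on an additive tree, the
    split with the smallest sum of cross-pair distances is correct."""
    s_ab = d[a][b] + d[c][e]  # topology ab | ce
    s_ac = d[a][c] + d[b][e]  # topology ac | be
    s_ae = d[a][e] + d[b][c]  # topology ae | bc
    best = min(s_ab, s_ac, s_ae)
    if best == s_ab:
        return ((a, b), (c, e))
    if best == s_ac:
        return ((a, c), (b, e))
    return ((a, e), (b, c))

# Additive distances on the tree ((A,B),(C,D)) with unit branch lengths:
d = {"A": {"B": 2, "C": 3, "D": 3},
     "B": {"A": 2, "C": 3, "D": 3},
     "C": {"A": 3, "B": 3, "D": 2},
     "D": {"A": 3, "B": 3, "C": 2}}
print(quartet_topology(d, "A", "B", "C", "D"))  # -> (('A', 'B'), (('C', 'D')) pairing
```

QTree's contribution is not the query itself but the balanced search tree that lets each such query place a taxon in O(log n) time.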
|
55 |
Evaluating Wind Power Generating Capacity Adequacy Using MCMC Time Series Model. Almutairi, Abdulaziz, 19 September 2014 (has links)
In recent decades, there has been a dramatic increase in the use of renewable energy resources by power utilities around the world. The tendency toward renewable energy resources is mainly due to environmental concerns and the fuel cost escalation associated with conventional fossil generation. Among renewable resources, wind energy is a proven source of power generation that contributes positively to global, social, and economic environments. Nowadays, wind energy is a mature, abundant, and emission-free power generation technology, and a significant percentage of electrical power demand is supplied by wind. However, the intermittent nature of wind generation introduces various challenges for both the operation and planning of power systems. One of these challenges arises from the reliability assessment point of view: there is a recognized need to study the contribution of wind generation to overall system reliability and to ensure the adequacy of generation capacity.
Wind power generation differs from conventional (i.e., fossil-based) generation in that wind power is variable and non-controllable, which can affect power system reliability. Therefore, modeling wind generation in a reliability assessment calls for reliable stochastic simulation techniques that can properly handle the uncertainty and precisely reflect the variable characteristics of the wind at a particular site. The research presented in this thesis focuses on developing a reliable and appropriate model for the reliability assessment of power system generation, including wind energy sources. This thesis uses the Markov chain Monte Carlo (MCMC) technique because of its ability to produce synthetic wind power time series data that sufficiently capture the randomness of the wind while preserving the statistical and temporal characteristics of the measured data. The synthetic wind power time series based on MCMC is then coupled with a probabilistic sequential methodology for conventional generation in order to assess the overall adequacy of generating systems.
The study presented in this thesis is applied to two test systems, designated the Roy Billinton Test System (RBTS) and the IEEE Reliability Test System (IEEE-RTS). A wide range of reliability indices are then calculated, including loss of load expectation (LOLE), loss of energy expectation (LOEE), loss of load frequency (LOLF), energy not supplied per interruption (ENSPI), demand not supplied per interruption (DNSPI), and expected duration per interruption (EDPI). To show the effectiveness of the proposed methodology, a further study is conducted to compare the obtained reliability indices using the MCMC model and the ARMA model, which is often used in reliability studies. The methodologies and the results illustrated in this thesis aim to provide useful information to planners or developers who endeavor to assess the reliability of power generation systems that contain wind generation.
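The synthesis-then-adequacy chain described above can be sketched minimally: fit a first-order Markov chain over equal-width power bins, generate a synthetic year, and count hours of lost load against a toy fixed load. All data and settings below are invented; the thesis couples the synthetic series with a full sequential model of conventional units rather than the single fixed unit shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_and_generate(series, n_states, length):
    """Fit a first-order Markov chain to binned power output and generate a
    synthetic series: the core of the MCMC synthesis step sketched here.
    States are equal-width power bins."""
    bins = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, bins) - 1, 0, n_states - 1)
    P = np.full((n_states, n_states), 1e-9)  # small floor avoids absorbing rows
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    P /= P.sum(axis=1, keepdims=True)        # row-normalize to probabilities
    centers = (bins[:-1] + bins[1:]) / 2.0
    s, out = states[0], []
    for _ in range(length):
        s = rng.choice(n_states, p=P[s])
        out.append(centers[s])
    return np.array(out)

wind = rng.weibull(2.0, 2000) * 8.0           # stand-in for measured wind power data
synthetic = fit_and_generate(wind, 10, 8760)  # one synthetic year, hourly
load = 5.0                                    # toy constant load
capacity = synthetic + 4.0                    # synthetic wind plus one conventional unit
lole_hours = int(np.sum(capacity < load))     # hours per year with lost load
print(lole_hours)
```

A real LOLE/LOEE study would repeat this over many synthetic years and a chronological load curve, with forced outages of the conventional units sampled sequentially.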
|
56 |
Consensus analysis of networked multi-agent systems with second-order dynamics and Euler-Lagrange dynamics. Mu, Bingxian, 30 May 2013 (has links)
Consensus is a central issue in designing multi-agent systems (MASs). How to design control protocols under certain communication topologies is the key to solving consensus problems. This thesis focuses on investigating consensus protocols under different scenarios: (1) second-order system dynamics with Markov time delays; (2) Euler-Lagrange dynamics with uniform and nonuniform sampling strategies and an event-based control strategy.
Chapter 2 focuses on the consensus problem of multi-agent systems with random delays governed by a Markov chain. For second-order dynamics under the sampled-data setting, we first convert the consensus problem into the stability analysis of the equivalent error system dynamics. By designing a suitable Lyapunov function and deriving a set of linear matrix inequalities (LMIs), we analyze the mean square stability of the error system dynamics under a fixed communication topology. Since the transition probabilities of a Markov chain are sometimes partially unknown, we propose a method of estimating the delay for the next sampling instant, and we explicitly give a lower bound on the probability of correct delay estimation that ensures the stability of the error system dynamics. Finally, by applying an augmentation technique, we convert the error system dynamics into a delay-free stochastic system. A sufficient condition is established to guarantee the consensus of networked multi-agent systems with switching topologies. Simulation studies for a fleet of unmanned vehicles verify the theoretical results.
In Chapter 3, we propose consensus control protocols involving both position and velocity information of the MASs with the linearized Euler-Lagrange dynamics, under uniform sampling and nonuniform sampling schemes, respectively. Then we extend the results to the case of applying the centralized event-triggered strategy, and accordingly analyze the consensus property. Simulation examples and comparisons verify the effectiveness of the proposed methods. / Graduate / 0548
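As a minimal illustration of the kind of sampled consensus update analyzed above, here is the delay-free, fixed-topology, first-order special case on an undirected path graph. The graph, initial states, and step size are invented for the example; the thesis treats second-order and Euler-Lagrange dynamics with Markov delays, which this sketch does not capture.

```python
import numpy as np

# Undirected path graph on 4 agents; L is the graph Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 3.0, 5.0, 7.0])  # initial agent states
eps = 0.2                            # step size below 1/(max degree) ensures convergence
for _ in range(200):
    x = x - eps * (L @ x)            # sampled first-order consensus update

print(np.round(x, 3))  # all states approach the average of the initial states, 4.0
```

Because the Laplacian of an undirected graph has zero row sums and is symmetric, the state sum is invariant, so the agents agree on the initial average.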
|
58 |
Bayesian Inference for Stochastic Volatility Models. Men, Zhongxian, January 2012 (has links)
Stochastic volatility (SV) models provide a natural framework for representing time series of financial asset returns. As a result, they have become increasingly popular in the finance literature, although they have also been applied in other fields such as signal processing, telecommunications, engineering, and biology.
In working with SV models, an important issue is how to estimate their parameters efficiently and how to assess how well they fit real data. In the literature, commonly used estimation methods for SV models include the generalized method of moments, simulated maximum likelihood, quasi-maximum likelihood, and Markov chain Monte Carlo (MCMC) methods. Among these approaches, MCMC methods are the most flexible in dealing with the complicated structure of the models. However, due to the difficulty of selecting the proposal distribution for Metropolis-Hastings methods, they are in general not easy to implement, and in some cases convergence problems may arise at the implementation stage. In light of these concerns, we propose in this thesis new estimation methods for univariate and multivariate SV models. In simulating the latent states of heavy-tailed SV models, we recommend the slice sampler as the main tool for sampling the proposal distribution when the Metropolis-Hastings method is applied. For SV models without heavy tails, a simple Metropolis-Hastings method is developed for simulating the latent states. Since the slice sampler can adapt to the analytical structure of the underlying density, it is more efficient: a sample point can be obtained from the target distribution within a few iterations of the sampler, whereas in the original Metropolis-Hastings method many sampled values often need to be discarded.
In the analysis of multivariate time series, multivariate SV models with more general specifications have been proposed to capture the correlations between the innovations of the asset returns and those of the latent volatility processes. Due to restrictions on the variance-covariance matrix of the innovation vectors, the estimation of the multivariate SV (MSV) model is challenging. To tackle this issue, for a very general setting of an MSV model we propose a straightforward MCMC method in which a Metropolis-Hastings step is employed to sample the constrained variance-covariance matrix, with an inverse Wishart proposal distribution. Again, the log volatilities of the asset returns can then be simulated via a single-move slice sampler.
Recently, factor SV models have been proposed to extract hidden market changes. Geweke and Zhou (1996) propose a factor SV model based on factor analysis to measure pricing errors in the context of arbitrage pricing theory, letting the factors follow the univariate standard normal distribution. Modifications of this model have been proposed, among others, by Pitt and Shephard (1999a) and Jacquier et al. (1999). The main feature of these factor SV models is that the factors follow a univariate SV process, where the loading matrix is a lower triangular matrix with unit entries on the main diagonal. Although factor SV models have been successful in practice, it has been recognized that the order of the components may affect the sample likelihood and the selection of the factors. Therefore, in applications, the component order has to be considered carefully; for instance, the factor SV model should be fitted to several permutations of the data to check whether the ordering affects the estimation results. In this thesis, a new factor SV model is proposed. Instead of setting the loading matrix to be lower triangular, we set it to be column-orthogonal and assume that each column has unit length. Our method removes the permutation problem, since the model does not need to be refitted when the order is changed. Because a strong assumption is imposed on the loading matrix, the estimation seems even harder than for the previous factor models; for example, we have to sample the columns of the loading matrix while keeping them orthonormal. To tackle this issue, we use the Metropolis-Hastings method to sample the loading matrix one column at a time, while the orthonormality between the columns is maintained using the technique proposed by Hoff (2007): a von Mises-Fisher distribution is sampled and the generated vector is accepted through the Metropolis-Hastings algorithm.
Simulation studies and applications to real data are conducted to examine our inference methods and test the fit of our model. Empirical evidence illustrates that our slice sampler within MCMC methods works well in terms of parameter estimation and volatility forecasting. Examples using financial asset return data demonstrate that the proposed factor SV model is able to characterize the hidden market factors that mainly govern the financial time series. Kolmogorov-Smirnov tests conducted on the estimated models indicate that the models do a reasonable job of describing real data.
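A toy sketch of a single-move update for one latent log-volatility in the basic AR(1) SV model, using random-walk Metropolis-Hastings as a stand-in for the slice sampler the thesis recommends. All parameter values are illustrative and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a basic univariate SV model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,   y_t = exp(h_t/2)*eps_t
mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 500
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

def mh_update_state(t, h, y, n_iter=50):
    """Random-walk Metropolis-Hastings for one latent log-volatility h_t,
    holding its neighbours fixed. The full conditional combines the
    observation density with the AR(1) prior links to h_{t-1} and h_{t+1}."""
    def log_target(ht):
        lp = -0.5 * (ht + y[t] ** 2 * np.exp(-ht))                      # y_t | h_t
        lp += -0.5 * ((ht - mu - phi * (h[t - 1] - mu)) / sigma_eta) ** 2
        if t + 1 < len(h):
            lp += -0.5 * ((h[t + 1] - mu - phi * (ht - mu)) / sigma_eta) ** 2
        return lp
    ht = h[t]
    for _ in range(n_iter):
        prop = ht + 0.3 * rng.normal()  # random-walk proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(ht):
            ht = prop
    return ht

print(round(mh_update_state(10, h, y), 3))
```

A full sampler would sweep this update over all t and interleave draws of (mu, phi, sigma_eta); the slice sampler replaces the tuned random-walk step with one that adapts to the target automatically.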
|
59 |
Delay analysis of molecular communication using filaments and relay-enabled nodes. Darchinimaragheh, Kamaloddin, 17 December 2015 (has links)
In this thesis, we suggest using nano-relays in a network using molecular communication in free space to improve the performance of the system in terms of delay. An approximation method for jump diffusion processes, based on Markov chains, is used to model molecular propagation in such scenarios. The model is validated by comparing analytic results with simulation results. The results illustrate the delay advantage of using nano-relays over pure diffusion. The proposed model is then used to investigate the effect of different parameters, such as filament length and the number of filaments attached to each nano-relay, on the delay performance of the communication technique.
We used the transient solution of the model in the first set of results. However, the stationary solution of the model can generate useful results, too. In the second set of results, the model is extended to an unbounded scenario. Considering the propagation as a one-sided skip-free process and using matrix analytic methods, we find the final distribution of the position of information molecules. It is shown that it is possible to keep molecules in a desired region. The effect of different parameters on the final distribution of the position of information molecules is also investigated. This analysis can be useful in drug delivery applications. / February 2016
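The Markov-chain approximation idea can be illustrated with a one-dimensional birth-death chain whose stationary distribution follows from detailed balance. The rates below are invented, and the thesis's matrix-analytic treatment of the skip-free process is far more general than this sketch.

```python
import numpy as np

def stationary_birth_death(up, down, n):
    """Stationary distribution of a birth-death Markov chain on {0..n-1}
    via detailed balance: pi[k+1]/pi[k] = up[k]/down[k+1]. A minimal
    stand-in for the matrix-analytic solution described above."""
    pi = np.ones(n)
    for k in range(n - 1):
        pi[k + 1] = pi[k] * up[k] / down[k + 1]
    return pi / pi.sum()

n = 20
up = np.full(n, 0.4)    # rate of moving away from the source state
down = np.full(n, 0.6)  # stronger pull back keeps molecules near the source
pi = stationary_birth_death(up, down, n)
print(round(pi[0], 3), round(pi[:5].sum(), 3))
```

With the pull-back rate exceeding the outward rate, the stationary mass decays geometrically with distance, which is the "keep molecules in a desired region" effect in miniature.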
|
60 |
Economic design of X-bar control charts for monitoring autocorrelated processes. Franco, Bruno Chaves [UNESP], 15 June 2011 (has links)
This research proposes an economic design of an X-bar control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval, and the control limit coefficient, with a genetic algorithm searching for the minimum cost. A Markov chain is used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing efficiency. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
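The adverse effect of autocorrelation noted above can be illustrated with a small simulation: naive three-sigma limits designed for independent data, applied to AR(1) observations, trigger false alarms far sooner. This is a stand-in for the Markov-chain run-length computation; all settings are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def arl0(phi, n=5, runs=200, max_samples=3000):
    """In-control average run length of an X-bar chart whose three-sigma
    limits assume independent observations, applied to AR(1) data with
    coefficient phi. Within-sample correlation inflates the variance of
    the sample mean, so the naive limits are too tight when phi > 0."""
    sigma_x = 1.0 / np.sqrt(1.0 - phi ** 2)  # marginal sd of the AR(1) process
    ucl = 3.0 * sigma_x / np.sqrt(n)         # naive limits ignoring correlation
    lengths = []
    for _ in range(runs):
        x = 0.0
        for t in range(1, max_samples + 1):
            obs = np.empty(n)
            for i in range(n):
                x = phi * x + rng.normal()   # consecutive AR(1) observations
                obs[i] = x
            if abs(obs.mean()) > ucl:
                lengths.append(t)            # (false) alarm at sample t
                break
        else:
            lengths.append(max_samples)
    return float(np.mean(lengths))

print(arl0(0.0), arl0(0.7))  # autocorrelation sharply shortens the in-control ARL
```

An economic design along Duncan's lines would fold this run-length behaviour into the cost function and let the genetic algorithm pick n, the sampling interval, and the limit coefficient jointly.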
|