About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Bank Credit Risk Measurement: An Application and Empirical Study of the Markov Model

Yang, Tsung-Hsien 27 July 2004 (has links)
No description available.
62

NONE

Huang, Chih-peng 27 July 2004 (has links)
No description available.
63

An Analytical Model of Channel Preemption Mechanism for WLAN-embedded Cellular Networks

Wei, Wei-Feng 28 June 2007 (has links)
The rapid growth of wireless and cellular technologies in recent years has brought various applications into our daily lives, and the integration of WLANs with cellular networks has therefore attracted increasing attention from researchers. In this thesis, we propose a preemptive channel allocation mechanism for WLAN-embedded cellular networks. In such integrated networking environments, frequent handoffs may result in dramatic performance degradation. In our model, a mobile node first utilizes the cellular network to support high mobility. However, the capacity of a BS is easily saturated. To minimize session blocking, a mobile node outside the WLAN coverage can preempt the channel(s) occupied by a mobile node inside the WLAN coverage; the preempted mobile node can still access the Internet through the WLAN AP. For performance evaluation, we build a three-dimensional Markov chain to analyze the proposed mechanism and derive the residence times inside and outside the WLAN coverage, respectively. Finally, we evaluate the overall network performance in terms of the number of active sessions over the WLAN, the channel utilization of a BS, the session blocking probability, the preemption probability, and the preempted probability. From the evaluation, we observe the relative performance improvements of the proposed channel preemption mechanism.
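As a rough illustration of this style of analysis (not the thesis's actual three-dimensional chain), the sketch below builds a simplified two-dimensional continuous-time Markov chain for one cell, where sessions outside the WLAN coverage may preempt sessions inside it and the preempted session is assumed to fall back to the AP. The channel count, rates, and derived metrics are all illustrative assumptions.

```python
# A minimal sketch, assuming a simplified 2-D state (m, n): m = BS channels held
# by sessions outside the WLAN coverage, n = BS channels held by sessions inside
# it (preemptable). Rates and channel count are made up for illustration.
import numpy as np

C = 4                          # base-station channels (assumed)
lam_out, lam_in = 1.0, 1.5     # session arrival rates outside / inside WLAN (assumed)
mu = 1.0                       # session completion rate (assumed)

states = [(m, n) for m in range(C + 1) for n in range(C + 1 - m)]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (m, n) in states:
    i = idx[(m, n)]
    if m + n < C:                          # free channel: both arrival types admitted
        Q[i, idx[(m + 1, n)]] += lam_out
        Q[i, idx[(m, n + 1)]] += lam_in
    elif n > 0:                            # BS full: an outside arrival preempts an inside session
        Q[i, idx[(m + 1, n - 1)]] += lam_out
    if m > 0:                              # departures
        Q[i, idx[(m - 1, n)]] += m * mu
    if n > 0:
        Q[i, idx[(m, n - 1)]] += n * mu
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

blocking = pi[idx[(C, 0)]]   # outside session finds the BS full with nobody to preempt
preempt = lam_out * sum(pi[idx[s]] for s in states if sum(s) == C and s[1] > 0)
print("outside-session blocking probability:", round(float(blocking), 4))
print("preemption rate:", round(float(preempt), 4))
```

From the same stationary distribution one could also read off the channel utilization and the preempted probability mentioned in the abstract.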
64

Queueing Analysis of CDMA Unslotted ALOHA Systems with Finite Buffers

Okada, Hiraku, Yamazato, Takaya, Katayama, Masaaki, Ogawa, Akira 10 1900 (has links)
No description available.
65

A Dynamic Channel Allocation Mechanism with Priorities in Wireless Networks

Lin, Hsin-Yuan 27 July 2000 (has links)
Pico-cellular architectures fully reuse frequencies to increase network capacity. However, the small cell range increases the occurrence of handoffs. Previous work on channel allocation can reduce the blocking probability of handoff calls, but may increase the blocking probability of new calls. As a result, channel utilization decreases because these schemes cannot adapt to network changes. In this thesis, we present a dynamic channel allocation mechanism with priority support. All channels and calls are divided into high and low priority. If no high-priority channel is available for a high-priority call, the call may downgrade its priority, sacrificing some QoS to utilize low-priority channels. We define two new arrays of network status information: the next_cell state and the transition probability. The next_cell state stores the prior M Cell_Ids to which handoff calls may move, and the transition probability stores the probabilities of active calls moving to neighboring cells. Using the next_cell state and transition probability, we can accurately predict the probabilities of mobile hosts moving to neighboring cells, and can therefore dynamically adjust the bandwidth reservation requests sent to neighboring cells according to the latest transition probabilities and the number of active calls in the cell. We analyze the proposed mechanism through a mathematical model: we build a four-dimensional Markov chain and use the MATLAB [41] tool to evaluate blocking probability, channel throughput, and utilization. We find that the blocking probability of handoff calls can be decreased and channel utilization increased through the proposed channel allocation mechanism with high- and low-priority support.
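The bandwidth-reservation idea in this abstract can be illustrated with a small sketch: given each active call's predicted handoff probabilities (the transition-probability array described above), sum the expected bandwidth per neighboring cell. The cell IDs, probabilities, and bandwidth units here are hypothetical, not values from the thesis.

```python
# A minimal sketch, assuming each active call carries a dict of predicted
# handoff probabilities to neighboring cells (hypothetical values).
from collections import defaultdict

def reservation_requests(active_calls, bandwidth_per_call=1.0):
    """Expected bandwidth to reserve in each neighboring cell."""
    requests = defaultdict(float)
    for call in active_calls:
        for cell_id, prob in call["next_cells"].items():
            requests[cell_id] += prob * bandwidth_per_call
    return dict(requests)

# Three active calls in the current cell, each with its own predicted targets.
calls = [
    {"next_cells": {"B": 0.6, "C": 0.3}},
    {"next_cells": {"B": 0.2, "D": 0.7}},
    {"next_cells": {"C": 0.5}},
]
print(reservation_requests(calls))   # about {'B': 0.8, 'C': 0.8, 'D': 0.7}, up to rounding
```

Recomputing these sums as calls arrive, depart, or update their transition probabilities gives the kind of dynamic adjustment the abstract describes.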
66

Risk Management of Stochastic Investment: The Case of the Public Welfare Lottery in Taiwan

Chen, Wen-Tai 30 July 2003 (has links)
Abstract: With economic growth and the advent of the post-industrial age in the 21st century, a variety of investment vehicles have progressively appeared, and people's traditionally conservative attitude toward investment has gradually shifted to a more aggressive, risk-tolerant one. Simple financial instruments such as certificates of deposit, bonds, or even the stock market can no longer satisfy general investors' needs, so more and more financial products have become available, especially derivative securities such as options, futures, and stock options. Furthermore, horse racing, dog racing, and even lotteries are no longer taboos in Taiwan. In most countries, lotteries are hosted mainly by the government, which regards them as an additional, effortless source of tax revenue.

In Taiwan's earlier periods, the only lottery available was the national "Ai-Guo" lottery issued by the government. Unfortunately, many unlawful gambling rings took advantage of its popularity by using it as the basis of the so-called "Da-Jia-Le" lottery, an illegal yet popular underground gambling activity. The "Da-Jia-Le" lottery not only corrupted social values and caused much crime, but also had an acute and serious impact on regular economic operations. Consequently, the government resolutely halted the "Ai-Guo" lottery. Nevertheless, the prevailing gambling mania did not decline; on the contrary, it transformed into a covert, under-the-counter gambling operation attached to the Hong Kong lottery. The craze swept almost the entire country, creating more and more social problems and troubling the authorities for a long time. To settle this predicament, the government decided to draw on the experience of other countries and launch local lotteries in Taiwan instead of merely prohibiting people from attending illicit gambling activities. On the one hand, the government could set the lottery business back on the right track and eliminate the root of the underground economic operations that caused so many social problems; on the other hand, the local lottery would provide extra tax revenue for the administration as well as more employment opportunities for handicapped people. Accordingly, the Taiwan lottery was launched at the beginning of 2002, and, mainly owing to the attraction of the surprisingly large jackpot of its highest prize, the public response has been overwhelming.

This social environment, in which stochastic investment activities have become authorized and gradually prevalent, directly motivates this research into the feasibility of such investments. The research takes the Taiwan Public Welfare Lottery as an investment case study, exploring the game rules of the lottery in an attempt to enhance the probability of winning prizes, and testing and evaluating the effectiveness of the simple betting tools adopted by the general public in order to identify the most effective betting strategies for individual investors' reference. The process of this study is as follows:

1. Testing the randomness of the winning numbers in the Taiwan Public Welfare Lottery.
2. Estimating the possible distribution of the population via statistical methods.
3. Using cluster analysis to derive sub-clusters of selected numbers according to the patterns of the historical winning numbers.
4. Using a Markov chain to select the sub-cluster that lies within an investor's upper risk limit.

Consequently, this study reveals the following:

1. The winning numbers are random within the study samples.
2. The order statistics of the winning numbers follow a beta distribution.
3. The clustering effect is slightly enhanced when the sub-cluster is chosen through cluster analysis rather than at random.
4. The application of the Markov chain clearly reinforces the selection of sub-clusters.
5. The so-called "Smart Package Betting" adopted by the general public cannot improve the probability of winning; however, depending on an individual investor's needs, it can be a useful tool for risk management.

Through a seemingly aimless random event, the research explores the possible development of the modeling process, so as to understand and experience a manager's mental course of decision making in a dynamic, capricious, or even chaotic managerial environment.
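To make step 4 of the study process concrete, the sketch below estimates a transition matrix between sub-clusters of historical draws and picks the most probable next sub-cluster. The cluster labels, the number of clusters, and the draw history are invented for illustration and are not the thesis's data or procedure.

```python
# A minimal sketch, assuming each historical draw has already been assigned to
# one of k sub-clusters (labels below are invented).
import numpy as np

history = [0, 2, 1, 1, 0, 2, 2, 1, 0, 1, 2, 0, 0, 1]   # cluster label of each past draw
k = 3                                                   # number of sub-clusters

counts = np.zeros((k, k))
for prev, nxt in zip(history, history[1:]):
    counts[prev, nxt] += 1
P = counts / counts.sum(axis=1, keepdims=True)          # row-normalised transition estimates

current = history[-1]
print("estimated transition matrix:\n", P.round(2))
print("most likely next sub-cluster:", int(P[current].argmax()))
```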
67

The effects of bias on sampling algorithms and combinatorial objects

Miracle, Sarah 08 June 2015 (has links)
Markov chains are algorithms that can provide critical information from exponentially large sets efficiently through random sampling. These algorithms are ubiquitous across numerous scientific and engineering disciplines, including statistical physics, biology and operations research. In this thesis we solve sampling problems at the interface of theoretical computer science with applied computer science, discrete mathematics, statistical physics, chemistry and economics. A common theme throughout each of these problems is the use of bias.

The first problem we study is biased permutations, which arise in the context of self-organizing lists. Here we are interested in the mixing time of a Markov chain that performs nearest neighbor transpositions in the non-uniform setting. We are given "positively biased" probabilities $\{p_{i,j} \geq 1/2\}$ for all $i < j$ and let $p_{j,i} = 1 - p_{i,j}$. In each step, the chain chooses two adjacent elements $k$ and $\ell$ and exchanges their positions with probability $p_{\ell,k}$. We define two general classes of bias and give the first proofs that the chain is rapidly mixing for both. We also demonstrate that the chain is not always rapidly mixing by constructing an example requiring exponential time to converge to equilibrium.

Next we study rectangular dissections of an $n \times n$ lattice region into rectangles of area $n$, where $n = 2^k$ for an even integer $k$. We consider a weighted version of a natural edge-flipping Markov chain where, given a parameter $\lambda > 0$, we would like to generate each rectangular dissection (or dyadic tiling) $\sigma$ with probability proportional to $\lambda^{|\sigma|}$, where $|\sigma|$ is the total edge length. First we look at the restricted case of dyadic tilings, where each rectangle is required to have the form $R = [s2^{u},(s+1)2^{u}] \times [t2^{v},(t+1)2^{v}]$, where $s$, $t$, $u$ and $v$ are nonnegative integers. Here we show there is a phase transition: when $\lambda < 1$, the edge-flipping chain mixes in time $O(n^2 \log n)$, and when $\lambda > 1$, the mixing time is $\exp(\Omega(n^2))$. The behavior for general rectangular dissections is more subtle, and we show the chain requires exponential time both when $\lambda > 1$ and when $\lambda < 1$.

The last two problems we study arise directly from applications in chemistry and economics. Colloids are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. We characterize the high and low density phases for a general family of discrete interfering colloid models by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has a very high area-to-perimeter ratio and a very high density of one type of molecule. A special case is mixtures of squares and diamonds on $\mathbb{Z}^2$, which correspond to the Ising model at fixed magnetization.

Subsequently, we expanded techniques developed in the context of colloids to give a new rigorous underpinning to the Schelling model, which was proposed in 1971 by economist Thomas Schelling to understand the causes of racial segregation. Schelling considered residents of two types, where everyone prefers that the majority of his or her neighbors are of the same type. He showed through simulations that even mild preferences of this type can lead to segregation if residents move whenever they are not happy with their local environments. We generalize the Schelling model to include a broad class of bias functions determining individuals' happiness or desire to move. We show that for any influence function in this class, the dynamics will be rapidly mixing and cities will be integrated if the racial bias is sufficiently low. However, when the bias is sufficiently high, we show that the dynamics take exponential time to mix and a large cluster of one type will form.
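The biased transposition chain from the first problem above is simple enough to simulate directly. The sketch below uses a constant bias of 0.7 toward sorted order; the bias value, the number of elements, and the step count are arbitrary choices for illustration.

```python
# A minimal sketch of the biased nearest-neighbour transposition chain: pick an
# adjacent pair (k, ell) uniformly and swap it with probability p[(ell, k)],
# where p[(i, j)] >= 1/2 for i < j and p[(j, i)] = 1 - p[(i, j)].
import random

def biased_transposition_chain(perm, p, steps, seed=0):
    rng = random.Random(seed)
    perm = list(perm)
    n = len(perm)
    for _ in range(steps):
        pos = rng.randrange(n - 1)                   # choose an adjacent pair
        k, ell = perm[pos], perm[pos + 1]
        if rng.random() < p[(ell, k)]:               # exchange with probability p_{ell,k}
            perm[pos], perm[pos + 1] = perm[pos + 1], perm[pos]
    return perm

n = 6
p = {}
for i in range(n):
    for j in range(i + 1, n):
        p[(i, j)] = 0.7                              # constant "positive" bias (assumed)
        p[(j, i)] = 0.3
print(biased_transposition_chain(range(n), p, steps=10_000))
```

With a constant bias above 1/2 the chain drifts toward the identity permutation; the thesis's contribution is identifying general classes of bias for which such chains provably mix rapidly.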
68

The Distribution of the Length of the Longest Increasing Subsequence in Random Permutations of Arbitrary Multi-sets

Al-Meanazel, Ayat 07 October 2015 (has links)
The distribution theory of runs and patterns has a long and rich history. In this dissertation we study the distribution of some run-related statistics in sequences and random permutations of arbitrary multi-sets. Using the finite Markov chain imbedding technique (FMCI) proposed by Fu and Koutras (1994), we propose an alternative method to calculate the exact distribution of the total number of adjacent increasing and adjacent consecutive increasing subsequences in sequences. Fu and Hsieh (2015) obtained the exact distribution of the length of the longest increasing subsequence in random permutations. To the best of our knowledge, little or no work has been done on the exact distribution of the length of the longest increasing subsequence in random permutations of arbitrary multi-sets. Here we obtain that exact distribution, and we also obtain the exact distribution of the length of the longest increasing subsequence for the set of all permutations of length N generated from {1,2,...,n}. / February 2016
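For intuition (this is a brute-force Monte Carlo check, not the finite Markov chain imbedding approach of the thesis), one can estimate the distribution of the longest increasing subsequence length over random permutations of a small multi-set. The multi-set, the strict-increase convention, and the sample size below are assumptions.

```python
# A minimal sketch: Monte Carlo estimate of P(L = l), where L is the length of
# the longest strictly increasing subsequence of a random permutation of a
# multi-set (the multi-set below is arbitrary).
import random
from bisect import bisect_left
from collections import Counter

def lis_length(seq):
    """Patience-sorting style O(N log N) longest strictly increasing subsequence."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

multiset = [1, 1, 2, 2, 2, 3, 4, 4]     # arbitrary multi-set
samples = 100_000
rng = random.Random(0)
counts = Counter()
for _ in range(samples):
    perm = multiset[:]
    rng.shuffle(perm)
    counts[lis_length(perm)] += 1

for length in sorted(counts):
    print(f"P(L = {length}) ~ {counts[length] / samples:.4f}")
```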
69

Fast Algorithms for Large-Scale Phylogenetic Reconstruction

Truszkowski, Jakub January 2013 (has links)
One of the most fundamental computational problems in biology is that of inferring evolutionary histories of groups of species from sequence data. Such evolutionary histories, known as phylogenies, are usually represented as binary trees whose leaves represent extant species and whose internal nodes represent their shared ancestors. As the amount of sequence data available to biologists increases, very fast phylogenetic reconstruction algorithms are becoming necessary. Currently, large sequence alignments can contain up to hundreds of thousands of sequences, making traditional methods, such as Neighbour Joining, computationally prohibitive. To address this problem, we have developed three novel fast phylogenetic algorithms. The first algorithm, QTree, is a quartet-based heuristic that runs in O(n log n) time. It is based on a theoretical algorithm that reconstructs the correct tree, with high probability, assuming every quartet is inferred correctly with constant probability. The core of our algorithm is a balanced search tree structure that enables us to locate an edge in the tree in O(log n) time. Our algorithm is several times faster than all current methods, while its accuracy approaches that of Neighbour Joining. The second algorithm, LSHTree, is the first sub-quadratic time algorithm with theoretical performance guarantees under a Markov model of sequence evolution. Our new algorithm runs in O(n^{1+γ(g)} log^2 n) time, where γ is an increasing function of an upper bound g on the mutation rate along any branch in the phylogeny, and γ(g) < 1 for all g. For phylogenies with very short branches, the running time of our algorithm is close to linear. In experiments, our prototype implementation was more accurate than the current fast algorithms, while being comparably fast. In the final part of this thesis, we apply the algorithmic framework behind LSHTree to the problem of placing large numbers of short sequence reads onto a fixed phylogenetic tree. Our initial results in this area are promising, but there are still many challenges to be resolved.
70

The Application of Markov Chain Monte Carlo Techniques in Non-Linear Parameter Estimation for Chemical Engineering Models

Mathew, Manoj January 2013 (has links)
Modeling of chemical engineering systems often necessitates using non-linear models. These models can range in complexity from a simple analytical equation to a system of differential equations. Regardless of what type of model is being utilized, determining parameter estimates is essential in everyday chemical engineering practice. One promising approach to non-linear regression is a technique called Markov Chain Monte Carlo (MCMC). This method produces reliable parameter estimates and generates joint confidence regions (JCRs) with correct shape and correct probability content. Despite these advantages, its application in the chemical engineering literature has been limited. Therefore, in this project, MCMC methods were applied to a variety of chemical engineering models. The objectives of this research are to (1) illustrate how to implement MCMC methods in complex non-linear models, (2) show the advantages of using MCMC techniques over classical regression approaches, and (3) provide practical guidelines on how to reduce the computational time. MCMC methods were first applied to the biological oxygen demand (BOD) problem. In this case study, an implementation procedure was outlined using specific examples from the BOD problem. The results from the study illustrated the importance of estimating the pure error variance as a parameter rather than fixing its value based on the mean square error. In addition, a comparison was carried out between the MCMC results and the results obtained from classical regression approaches. The findings show that although similar point estimates are obtained, JCRs generated from approximation methods cannot model the parameter uncertainty adequately. Markov Chain Monte Carlo techniques were then applied to estimating reactivity ratios in the Mayo-Lewis model, the Meyer-Lowry model, the direct numerical integration model, and the triad fraction multiresponse model. The implementation steps for each of these models were discussed in detail, and the results from this research were once again compared to previously used approximation methods. Once again, the conclusion drawn from this work was that MCMC methods must be employed in order to obtain JCRs with the correct shape and correct probability content. MCMC methods were also applied to estimating the kinetic parameters used in the solid oxide fuel cell study; more specifically, the kinetics of the water-gas shift reaction, which is used in generating hydrogen for the fuel cell, was studied. The results from this case study showed how the MCMC output can be analyzed in order to diagnose parameter observability and correlation. A significant portion of the model needed to be reduced due to these issues of observability and correlation. Point estimates and JCRs were then generated using the reduced model, and diagnostic checks were carried out in order to ensure the model was able to capture the data adequately. A few select parameters in the Waterloo Polymer Simulator were estimated using the MCMC algorithm. Previous studies have shown that accurate parameter estimates and JCRs could not be obtained using classical regression approaches. However, when MCMC techniques were applied to the same problem, reliable parameter estimates and confidence regions with the correct shape and correct probability content were observed. This case study offers a strong argument as to why classical regression approaches should be replaced by MCMC techniques.
Finally, a very brief overview of the computational times for each non-linear model used in this research was provided. In addition, a serial farming approach was proposed and a significant decrease in computational time was observed when this procedure was implemented.
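As a flavour of the approach (not the thesis's code or data), the sketch below runs a random-walk Metropolis sampler on a two-parameter BOD-style model y = theta1 * (1 - exp(-theta2 * t)), treating the error variance as a third sampled parameter as the abstract recommends. The data, priors, starting values, and proposal widths are all invented for illustration.

```python
# A minimal sketch, assuming the common two-parameter BOD form
# y = theta1 * (1 - exp(-theta2 * t)) plus Gaussian error with variance sigma^2,
# which is sampled (as log sigma^2) alongside the model parameters.
import numpy as np

rng = np.random.default_rng(0)

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 7.0])        # synthetic times (assumed)
y = np.array([8.3, 10.3, 19.0, 16.0, 15.6, 19.8])   # synthetic observations (assumed)

def log_post(params):
    """Gaussian likelihood with flat priors on positive theta and on log(sigma^2)."""
    th1, th2, log_s2 = params
    if th1 <= 0 or th2 <= 0:
        return -np.inf
    resid = y - th1 * (1.0 - np.exp(-th2 * t))
    s2 = np.exp(log_s2)
    return -0.5 * len(y) * np.log(2 * np.pi * s2) - 0.5 * resid @ resid / s2

n_iter = 50_000
step = np.array([0.8, 0.05, 0.3])                    # proposal std devs (hand-tuned guess)
current = np.array([20.0, 0.25, 0.0])                # rough starting point
lp = log_post(current)
chain = np.empty((n_iter, 3))
accepted = 0
for i in range(n_iter):
    proposal = current + step * rng.standard_normal(3)
    lp_new = log_post(proposal)
    if np.log(rng.random()) < lp_new - lp:           # Metropolis accept/reject
        current, lp = proposal, lp_new
        accepted += 1
    chain[i] = current

burn = chain[n_iter // 2:]                           # discard first half as burn-in
print("acceptance rate:", accepted / n_iter)
print("posterior means (theta1, theta2, sigma^2):",
      burn[:, 0].mean(), burn[:, 1].mean(), np.exp(burn[:, 2]).mean())
```

Pairwise scatter plots of the retained draws would then trace out the joint confidence regions whose shape and probability content the abstract emphasizes.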
