111
A Collapsing Method for Efficient Recovery of Optimal Edges. Hu, Mike. January 2002.
In this thesis we present a novel algorithm, HyperCleaning*, for effectively inferring phylogenetic trees. The method is based on the quartet method paradigm and is guaranteed to recover the best-supported edges of the underlying phylogeny with respect to the witness quartet set.
This is performed efficiently using a collapsing mechanism that employs a memory/time tradeoff to ensure no loss of information. This enables HyperCleaning* to feasibly solve a relaxed version of the Maximum-Quartet-Consistency problem, thus providing a valuable tool for inferring phylogenies using quartet-based analysis.
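To make the quartet-support idea concrete, here is a minimal sketch (not the HyperCleaning* algorithm itself) that scores a candidate edge, viewed as a bipartition of the taxa, by counting how many quartets in a witness set are consistent with it; the taxa, quartets, and function names are all invented for the example.

```python
def quartet_consistent(quartet, side_a, side_b):
    """A resolved quartet ab|cd is consistent with a split A|B of the taxa
    if one of its pairs lies entirely in A and the other entirely in B."""
    (a, b), (c, d) = quartet
    return ({a, b} <= side_a and {c, d} <= side_b) or \
           ({a, b} <= side_b and {c, d} <= side_a)

def edge_support(split, witness_quartets):
    """Score a candidate edge (split) by the number of consistent witness quartets."""
    side_a, side_b = split
    return sum(quartet_consistent(q, side_a, side_b) for q in witness_quartets)

# Toy witness quartet set over taxa {1, ..., 5}; ((a, b), (c, d)) denotes ab|cd.
witness = [((1, 2), (3, 4)), ((1, 2), (3, 5)), ((1, 2), (4, 5)), ((1, 3), (4, 5))]
candidate_edges = [({1, 2}, {3, 4, 5}), ({1, 3}, {2, 4, 5})]
for edge in candidate_edges:
    print(edge, "-> support:", edge_support(edge, witness))
```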
112
Bayesian Analysis of Intratumoural Oxygen Data. Tang, Herbert Hoi Chi. January 2009.
There is now ample evidence that a lack of oxygen (hypoxia) within a tumour adversely affects the outcome of radiotherapy and a patient's chances of remaining disease free, so there is increasing interest in accurately determining oxygen concentration levels within a tumour. Hypoxic regions arise naturally in cancerous tumours because of their abnormal vasculature, and it is believed that oxygen is necessary for radiation to be effective in killing cancer cells. One method of measuring oxygen concentration within a tumour is the Eppendorf polarographic needle electrode, a method favoured by many clinical researchers because it is the only device inserted directly into the tumour and because it reports its findings in terms of oxygen partial pressure (PO2). Unfortunately, the Eppendorf measurements often contain anomalous readings (negative and extremely high values), and there is little consensus on how best to interpret the data. In this thesis, Bayesian methods are applied to estimate two measures commonly used in the current literature to quantify oxygen content within a tumour: the median PO2, and the Hypoxic Proportion (HP5), the percentage of readings below 5 mmHg. The results show that Bayesian parameter estimation reproduces the standard estimate for HP5 while providing an additional piece of information, an error bar that quantifies how uncertain the estimate is. Furthermore, using the principle of Maximum Entropy, we estimate the true median PO2 of the distribution instead of simply relying on the sample median, which may or may not accurately reflect the actual median PO2 inside the tumour. The advantage of the Bayesian approach is that it is grounded in probability theory and presents its results as probability density functions. These densities carry more information about the desired quantity than the single number reported in the current literature and allow more accurate and informative statements about the measure of hypoxia being estimated.
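As a minimal sketch of how a Bayesian treatment attaches an error bar to HP5, the snippet below assumes a simple Beta-Binomial model (not necessarily the model used in the thesis) and invented PO2 readings; the posterior reproduces the standard point estimate while also yielding a credible interval.

```python
import numpy as np
from scipy import stats

# Hypothetical Eppendorf PO2 readings in mmHg (invented data for illustration).
po2 = np.array([1.2, 3.5, 0.8, 7.9, 12.4, 2.1, 4.4, 25.0, 6.3, 1.9, 9.8, 3.0])

k = np.sum(po2 < 5.0)                 # readings below 5 mmHg
n = po2.size

# With a uniform Beta(1, 1) prior on HP5, the posterior is Beta(k + 1, n - k + 1).
posterior = stats.beta(k + 1, n - k + 1)

point_estimate = k / n                # the standard HP5 estimate
lo, hi = posterior.interval(0.95)     # 95% credible interval (the "error bar")

print(f"HP5 point estimate:    {point_estimate:.2f}")
print(f"Posterior mean:        {posterior.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```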
113
Convex relaxation for the planted clique, biclique, and clustering problems. Ames, Brendan. January 2011.
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programs by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer, for the maximum clique problem.

We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given an input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks the set of k disjoint cliques of G that maximizes the sum of the average weights, with respect to w, of the edges of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition, or cluster, are similar and the items in different clusters are dissimilar. For the graph G whose nodes represent a given data set and in which two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating the similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem.

We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
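The following sketch illustrates the nuclear norm relaxation recipe described above on an invented toy graph, using cvxpy. The constraint set is one common way to write the relaxation and may differ in details from the formulation in the thesis, and exact recovery of the planted clique is only guaranteed when it is sufficiently large relative to the diversionary edges.

```python
import itertools
import cvxpy as cp
import numpy as np

# Hypothetical toy instance (all data invented): a planted clique {0, 1, 2, 3} in a
# 6-node graph, plus a couple of diversionary edges.
n_nodes = 6
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # planted clique
         (3, 4), (4, 5)}                                    # extra edges
clique_size = 4

X = cp.Variable((n_nodes, n_nodes), symmetric=True)
constraints = [cp.sum(X) >= clique_size ** 2]
for i, j in itertools.combinations(range(n_nodes), 2):
    if (i, j) not in edges and (j, i) not in edges:
        constraints.append(X[i, j] == 0)     # off-diagonal entries vanish outside the edge set

# Replace rank(X) by its convex envelope, the nuclear norm, and minimize it.
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints)
problem.solve()

# When the relaxation is exact, X is (approximately) v v^T with v the 0/1 indicator
# of the planted clique, so the clique can be read off the diagonal.
v = np.sqrt(np.clip(np.diag(X.value), 0.0, None))
print("recovered clique:", [i for i in range(n_nodes) if v[i] > 0.5])
```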
114
A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater. Li, Cong. January 2012.
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. When a measurement falls below a certain value, it is usually recorded only as the quantification limit, and this thesis is aimed at handling such left-censored data. The topic is addressed by using the EM algorithm to obtain maximum likelihood estimates, and the estimation of distributions from censored trace element data is expounded on. Related simulations show that the estimation is acceptable.
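A minimal sketch of maximum likelihood estimation for left-censored data via the EM algorithm is given below, assuming a normal model on (for example) log-transformed concentrations and invented data with a single quantification limit; the actual models and data handling in the thesis may differ.

```python
import numpy as np
from scipy.stats import norm

def em_left_censored_normal(x, censored, n_iter=200):
    """EM estimation of (mu, sigma) for N(mu, sigma^2) data where some observations
    are left-censored: for those, x holds the quantification limit and only the
    event 'true value < limit' is known."""
    obs, lim = x[~censored], x[censored]
    mu, sigma = np.mean(x), np.std(x) + 1e-6       # crude starting values
    n = x.size
    for _ in range(n_iter):
        # E-step: moments of a normal distribution truncated above at the limit.
        alpha = (lim - mu) / sigma
        lam = norm.pdf(alpha) / np.clip(norm.cdf(alpha), 1e-12, None)
        e_x = mu - sigma * lam                               # E[X | X < limit]
        e_x2 = mu**2 + sigma**2 - sigma * lam * (mu + lim)   # E[X^2 | X < limit]
        # M-step: update mu, sigma from the completed sufficient statistics.
        mu = (obs.sum() + e_x.sum()) / n
        sigma = np.sqrt((np.sum(obs**2) + e_x2.sum()) / n - mu**2)
    return mu, sigma

# Invented example: log-concentrations with a detection limit (trace elements are
# often modelled on the log scale, so this would be applied to log-transformed data).
rng = np.random.default_rng(0)
true_values = rng.normal(loc=1.0, scale=0.8, size=500)
limit = 0.5
censored = true_values < limit
x = np.where(censored, limit, true_values)
print(em_left_censored_normal(x, censored))   # should roughly recover (1.0, 0.8)
```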
115
Are seals willing to pay for access to artificial kelp and live fish? Ruotimaa, Jenny. January 2007.
Environmental enrichment (EE) is used to improve the wellbeing of animals in human care. One way of testing which resources an animal prefers to have access to is to make it pay a price, in the form of time or energy spent to gain access to the resource. When measuring the motivation of animals, it is useful to compare the resource being evaluated with a resource of known value; food is often the comparator. The maximum price paid approach measures the highest price an animal is willing to pay for access to a resource. In this study, the motivation of a grey seal (Halichoerus grypus) to gain access to artificial kelp and live fish was measured, with food as the comparator. A large net cage with a weighted entrance gate and a non-weighted exit gate was used as the test arena. The seal had to enter by opening the entrance gate, which was loaded with increasing weights each day, in 10 steps up to 65 kg. The seal was not willing to pay any price for the live fish. The maximum price paid for the food was 60 kg, and for the artificial kelp 10 kg, i.e. 17% of the maximum price paid for food. The results suggest that neither live fish nor artificial kelp was an attractive EE for this seal. However, the study also shows that spring (the reproductive period) is not a good time to test motivation in grey seals.
116
Maximum Norm Regularity of Implicit Difference Methods for Parabolic Equations. Pruitt, Michael. January 2011.
We prove maximum norm regularity properties of L-stable finite difference methods for linear second-order parabolic equations with coefficients independent of time, valid for large time steps. These results are almost sharp; the regularity property for first differences of the numerical solution is of the same form as that of the continuous problem, and the regularity property for second differences is the same as that of the continuous problem except for logarithmic factors. This generalizes a result proved by Beale valid for the constant-coefficient diffusion equation, and is in the spirit of work by Aronson, Widlund and Thomée.

To prove maximum norm regularity properties for the homogeneous problem, we introduce a semi-discrete problem (discrete in space, continuous in time). We estimate the semi-discrete evolution operator and its spatial differences on a sector of the complex plane by constructing a fundamental solution. The semi-discrete fundamental solution is obtained from the fundamental solution to the frozen-coefficient problem by adding a correction term found through an iterative process. From the bounds obtained on the evolution operator and its spatial differences, we find bounds on the resolvent of the discrete elliptic operator and its differences through the Laplace transform representation of the resolvent. Using the resolvent estimates and the assumed stability properties of the time-stepping method in the Cauchy integral representation of the fully discrete solution operator yields the homogeneous regularity result.

Maximum norm regularity results for the inhomogeneous problem follow from the homogeneous results using Duhamel's principle. The results for the inhomogeneous problem imply that when the time step is taken proportional to the grid width, the rate of convergence of the numerical solution and its first differences is second-order in space, and the rate of convergence for second differences is second-order except for logarithmic factors.

As an application of the theory, we prove almost sharp maximum norm resolvent estimates for divergence-form elliptic operators on spatially periodic grid functions. Such operators are invertible, with inverses and their first differences bounded in maximum norm, uniformly in the grid width. Second differences of the inverse operator are bounded except for logarithmic factors.
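As a concrete and purely illustrative instance of the kind of scheme the abstract refers to, the sketch below applies backward Euler, an L-stable implicit method, to a variable-coefficient divergence-form heat equation on a periodic grid and reports maximum norms of the solution and its first differences; all parameters are invented and this is not the analysis carried out in the dissertation.

```python
import numpy as np

# Backward Euler for u_t = (a(x) u_x)_x on a periodic grid, with a
# time-independent coefficient a(x); grid size and data are invented.
m = 128                                  # number of grid points
h = 1.0 / m                              # grid width
k = h                                    # time step proportional to the grid width
x = h * np.arange(m)
a_half = 1.0 + 0.5 * np.cos(2 * np.pi * (x + h / 2))   # coefficient at midpoints x_{i+1/2}

# Divergence-form difference operator:
# (L u)_i = (a_{i+1/2}(u_{i+1} - u_i) - a_{i-1/2}(u_i - u_{i-1})) / h^2
L = np.zeros((m, m))
for i in range(m):
    L[i, (i + 1) % m] += a_half[i] / h**2
    L[i, (i - 1) % m] += a_half[i - 1] / h**2
    L[i, i] -= (a_half[i] + a_half[i - 1]) / h**2

u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # rough (discontinuous) initial data
A = np.eye(m) - k * L                           # backward Euler: (I - kL) u^{n+1} = u^n
for _ in range(50):
    u = np.linalg.solve(A, u)

first_diff = (np.roll(u, -1) - u) / h           # forward difference D_+ u
print("max |u|     :", np.max(np.abs(u)))
print("max |D_+ u| :", np.max(np.abs(first_diff)))
```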
117
Ripple Current Effect on Output Power of Solar Cell. Lin, Shin-Li. 25 July 2012.
This thesis investigates the effect of ripple current on the output power of solar cells. A solar panel illuminated by several metal halide lamps is set up to emulate a photovoltaic power system, which is cascaded with a boost converter and a buck-boost converter to draw triangular and trapezoidal currents, respectively. All experiments are carried out at room temperature with different ripple amplitudes and frequencies. The measured current and voltage waveforms indicate that the dynamic output characteristics are very different from the static ones obtained with dc loads. It is found that the output voltage lags the current when the peak of the rippled current goes beyond the maximum power point (MPP), leading to a decline in the average output power. This phenomenon becomes more severe for a higher peak, lower frequency, and larger charge of the rippled current exceeding the MPP. In addition, the decline in the average power may cause a shift of the MPP.
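The averaging effect can be illustrated with a hypothetical photovoltaic I-V model (all parameter values invented): because output power is a concave function of current near the MPP, a ripple that swings past the MPP lowers the average power, and the loss grows with the ripple amplitude.

```python
import numpy as np

# Hypothetical single-diode-style PV model (all parameter values invented):
# panel voltage as a function of drawn current, V(I) = Vt * ln((Isc - I)/I0 + 1).
Isc, I0, Vt = 5.0, 1e-9, 1.2   # short-circuit current [A], saturation current [A], scaling voltage [V]

def v_of_i(i):
    return Vt * np.log((Isc - i) / I0 + 1.0)

def p_of_i(i):
    return i * v_of_i(i)

# Locate the maximum power point numerically.
i_grid = np.linspace(0.0, Isc - 1e-6, 100_000)
i_mpp = i_grid[np.argmax(p_of_i(i_grid))]
p_mpp = p_of_i(i_mpp)

# Triangular ripple current centred on the MPP current; larger ripple swings the
# operating point past the MPP, and since P(I) is concave the average power drops.
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
triangle = 2.0 * np.abs(2.0 * (t - np.floor(t + 0.5))) - 1.0   # unit triangle wave in [-1, 1]

for ripple_amp in (0.0, 0.2, 0.5, 1.0):
    i_wave = np.clip(i_mpp + ripple_amp * triangle, 0.0, Isc - 1e-6)
    p_avg = np.mean(p_of_i(i_wave))
    print(f"ripple +/-{ripple_amp:.1f} A: average power {p_avg:.2f} W "
          f"({100 * p_avg / p_mpp:.2f}% of MPP power {p_mpp:.2f} W)")
```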
118
Improving Network Reliability: Analysis, Methodology, and Algorithms. Booker, Graham B. May 2010.
The reliability of networking and communication systems is vital for the nation's economy and security. Optical and cellular networks have become a critical infrastructure and are indispensable in emergency situations. This dissertation outlines methods for analyzing such infrastructures in the presence of catastrophic failures, such as a hurricane, as well as accidental failures of one or more components. Additionally, it presents a method for protecting against the loss of a single link in a multicast network, along with a technique that enables wireless clients to efficiently recover lost data sent by their source through collaborative information exchange.

A network's reliability during a natural disaster can be assessed by simulating the conditions under which it is expected to perform. This dissertation conducts the analysis of a cellular infrastructure in the aftermath of a hurricane through Monte-Carlo sampling and presents alternative topologies which reduce the resulting loss of calls. While previous research on restoration mechanisms for large-scale networks has mostly focused on handling the failures of single network elements, this dissertation examines the sampling methods used for simulating multiple failures. We present a quick method of finding a lower bound on a network's data loss through enumeration of possible cuts, as well as an efficient method of finding a tighter lower bound through genetic algorithms leveraging the niching technique.
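A minimal sketch of this kind of Monte Carlo reliability analysis is given below, on an invented toy topology with invented traffic demands rather than the cellular infrastructure studied in the dissertation; each link fails independently and the expected number of lost calls is estimated by sampling.

```python
import random
import networkx as nx

# Hypothetical backbone topology and traffic demands (all values invented); node 0
# plays the role of the switching centre that every site must reach to place calls.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
centre = 0
demand = {1: 120, 2: 80, 3: 150, 4: 60, 5: 90}   # calls originating at each site

def expected_lost_calls(p_fail, trials=20_000, seed=1):
    """Monte Carlo estimate of the expected number of lost calls when every link
    fails independently with probability p_fail (e.g. storm damage)."""
    rng = random.Random(seed)
    total_lost = 0
    for _ in range(trials):
        g = nx.Graph((u, v) for u, v in edges if rng.random() > p_fail)
        g.add_nodes_from(range(6))                      # keep isolated sites in the graph
        reachable = nx.node_connected_component(g, centre)
        total_lost += sum(c for site, c in demand.items() if site not in reachable)
    return total_lost / trials

for p in (0.05, 0.2, 0.5):
    print(f"link failure probability {p:.2f}: expected lost calls ~ {expected_lost_calls(p):.1f}")
```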
Mitigation of data losses in a multicast network can be achieved by adding redundancy and employing advanced coding techniques. By using Maximum Rank Distance (MRD) codes at the source, a provider can create a parity packet which is effectively linearly independent from the source packets, such that all packets may be transmitted through the network using the network coding technique. This allows all sinks to recover all of the original data even with the failure of an edge within the network. Furthermore, this dissertation presents a method that allows a group of wireless clients to cooperatively recover from erasures (e.g., due to failures) by using index coding techniques.
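As a much simpler stand-in for the MRD-based construction (a single XOR parity rather than an MRD code), the sketch below shows how one redundant packet lets a sink recover from the loss of any single source packet; the packet contents are invented for illustration.

```python
import numpy as np

# Three source packets of 8 bytes each, plus one XOR parity packet; the loss of
# any single source packet can be repaired from the remaining packets.
rng = np.random.default_rng(3)
source_packets = rng.integers(0, 256, size=(3, 8), dtype=np.uint8)
parity = source_packets[0] ^ source_packets[1] ^ source_packets[2]

# Suppose the link carrying packet 1 fails; the sink still receives the parity packet.
received = {0: source_packets[0], 2: source_packets[2], "parity": parity}
recovered_1 = received["parity"] ^ received[0] ^ received[2]
print("recovered == original:", np.array_equal(recovered_1, source_packets[1]))
```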
119
Generalized Maximum-Likelihood Algorithm for Time Delay Estimation in UWB Radio. Tsai, Wen-Chieh. 24 July 2004.
The main purpose of this thesis is to estimate the direct path in a dense multipath Ultra Wide-Band (UWB) environment. The time-of-arrival (ToA) estimation algorithm used is based on the Generalized Maximum-Likelihood (GML) algorithm. However, the GML algorithm is so time-consuming that obtaining results usually takes a very long time, and it sometimes fails to converge. Hence, schemes that improve the algorithm are investigated. In these schemes, the search is executed sequentially, and two threshold parameters must be determined: one on the arrival time of the estimated path and the other on its fading amplitude. The thresholds are used to terminate the sequential algorithm, and they are determined on the basis of error analysis, including the probability of error and the root-mean-square error. The analysis of the probability of error accounts for the probability of false alarm and the probability of miss; however, a trade-off between these two probabilities arises in the process of determining the thresholds, so the thresholds are chosen according to the required probability of error. We propose an improved scheme for determining the two thresholds: candidate pairs are evaluated within an appropriate range, the root-mean-square error is calculated for each pair, and the pair with the smallest error is chosen for use in ToA estimation. Simulation results show that, for SNR between -4 dB and 16 dB, the proposed scheme yields a smaller estimation error.
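The threshold-selection idea can be sketched as follows, with an invented toy channel model and a simplified first-path detector in place of the GML estimator: candidate (amplitude, time) threshold pairs are evaluated over many simulated channels, and the pair with the smallest root-mean-square ToA error is kept.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_channel(n_taps=200, true_toa=60, snr_db=12):
    """Invented toy channel: noise-only tap magnitudes before the true time of
    arrival, then an exponentially decaying multipath profile plus noise."""
    taps = np.abs(rng.normal(0.0, 1.0, n_taps))
    signal_amp = 10 ** (snr_db / 20)
    decay = np.exp(-0.05 * np.arange(n_taps - true_toa))
    taps[true_toa:] += signal_amp * decay * np.abs(rng.normal(1.0, 0.3, n_taps - true_toa))
    return taps

def estimate_toa(taps, amp_thresh, time_thresh):
    """Simplified first-path detector (a stand-in for the GML search): start at the
    strongest tap and accept earlier taps above amp_thresh * max, as long as each
    backward jump stays within time_thresh samples of the current first path."""
    first = int(np.argmax(taps))
    level = amp_thresh * taps[first]
    i = first - 1
    while i >= 0 and first - i <= time_thresh:
        if taps[i] > level:
            first = i
        i -= 1
    return first

# Grid search over candidate (amplitude, time) threshold pairs, keeping the pair
# with the smallest root-mean-square ToA error over many simulated channels.
true_toa, n_trials = 60, 300
channels = [simulate_channel(true_toa=true_toa) for _ in range(n_trials)]
best = None
for amp_thresh in np.linspace(0.1, 0.9, 9):
    for time_thresh in (5, 10, 20, 40):
        errors = [estimate_toa(ch, amp_thresh, time_thresh) - true_toa for ch in channels]
        rmse = np.sqrt(np.mean(np.square(errors)))
        if best is None or rmse < best[0]:
            best = (rmse, amp_thresh, time_thresh)
print(f"best thresholds: amplitude ratio {best[1]:.1f}, window {best[2]} samples, RMSE {best[0]:.2f} samples")
```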
120
On the Forming Causes and Strategies of Unresolved Cases in Executing the Monetary Payment Duty in Public Law. Qiu, Qi-Hong. 16 July 2007.
Since their founding in 2000, the branches of the Administrative Enforcement Agency have handled cases of monetary payment duty in public law totalling 30,178,624, with an average workforce of only about 400 people, and have brought in 1,133,000,000 in financial income for the nation over six years. Among these cases, 24,386,174 have been closed, which is more effective than in the past and helps keep the public from evading their duties in the hope of going unpursued, thereby realizing social fairness and justice and manifesting public authority. However, many problems still hide behind the new organizations and their systems, as the numbers indicate: unresolved cases number 5,809,904, while 373,211,609,753 dollars remain unexecuted.
To reduce the number of unclosed cases and improve administrative efficiency as measured by two indexes, the case-closing rate and the amount levied, this thesis combines transaction cost theory from the Coase Theorem with the concept of opportunity cost in law and economics, and discusses the forming causes and countermeasures from three aspects under the principle of maximum efficiency, while also taking the public-choice perspective of the obligors into account. Three main conclusions are drawn, as follows:
First, in terms of case management and evaluation, the number of cases closed must exceed the number received, and the different types of cases (classified by monetary amount) must be reduced proportionally; moreover, a specific execution procedure should be arranged for each type, with a clear termination point rather than allowing cases to drag on.
Second, regarding execution efficiency, the more payment channels the better; in addition, information technology is effective for exchanging information among the parties involved and speeds up case closure.
Third, regarding case intake, the caseload should be monitored against the available workforce so that staffing can be adjusted or the intake rate controlled; a voucher re-transfer evaluation mechanism should be set up to avoid administrative procedures that run empty and waste resources; and a pre-case database and a decision support system for case execution should be built to reduce the cost of research and decision making.