101

Connected Dominating Set Construction and Application in Wireless Sensor Networks

Wu, Yiwei 01 December 2009 (has links)
Wireless sensor networks (WSNs) are now widely used in many applications. Connected Dominating Set (CDS) based routing, one kind of hierarchical method, has received increasing attention as a way to reduce routing overhead. The concept of k-connected m-dominating sets (kmCDS) is used to provide fault tolerance and routing flexibility. In this thesis, we first consider how to construct a CDS in WSNs. We then propose centralized and distributed algorithms to construct a kmCDS. Finally, we introduce some basic ideas for using a CDS in other potential applications, such as partial coverage and data dissemination in WSNs.
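To make the object concrete, here is a minimal greedy sketch of CDS construction in Python. It is an illustrative baseline for connected graphs, not the thesis's centralized or distributed kmCDS algorithms; the adjacency-dict representation and the coverage-gain greedy rule are assumptions:

```python
# Minimal sketch of a greedy connected-dominating-set heuristic.
# Assumes a connected, undirected graph given as a dict of neighbour sets.
def greedy_cds(adj):
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))   # highest-degree seed
    cds = {start}
    covered = {start} | adj[start]                  # cds plus its neighbours
    while covered != nodes:
        # Only nodes adjacent to the current CDS are candidates,
        # which keeps the growing set connected by construction.
        frontier = {v for u in cds for v in adj[u] if v not in cds}
        # Pick the candidate that covers the most still-uncovered nodes.
        best = max(frontier, key=lambda v: len((adj[v] | {v}) - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds

if __name__ == "__main__":
    g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2}, 5: {3}}
    print(greedy_cds(g))                            # e.g. {2, 3}
```

Growing the set only through neighbours of already-selected nodes is what keeps the dominating set connected without a separate connection step.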
102

Inferring Evolutionary Processes of Humans

Li, Sen January 2012 (has links)
More and more human genomic data have become available in recent years through improvements in DNA sequencing technologies. These data provide abundant information on genetic variation, an important resource for understanding the evolutionary history of humans. In this thesis I evaluated the performance of the Approximate Bayesian Computation (ABC) approach for inferring demographic parameters from large-scale population genomic data. Based on simulation results, I conclude that ABC will continue to be a useful tool for analysing realistic genome-wide population-genetic data in the post-genomic era. Secondly, I applied the ABC approach to estimate the prehistoric events connected with the “Bantu expansion”, the spread of peoples from West Africa. The analysis, based on genetic data with a large number of loci, supports rapid population growth in West Africans, which led to their concomitant spread to southern and eastern Africa. Contrary to hypotheses based on language studies, I found that Bantu speakers in southern Africa likely migrated directly from West Africa, not from East Africa. Thirdly, I evaluated Thomson's estimator of the time to the most recent common ancestor (TMRCA); it is robust to different recombination rates and the least biased among commonly used approaches. I used the Thomson estimator to infer the genome-wide distribution of TMRCA from complete human genome sequence data in various populations across the world and compared the results to simulated data. Finally, I investigated the effects of selection and demography on patterns of genetic polymorphism. In particular, for a constant-size population we could detect a clear signal of selection in the distribution of TMRCA; if the population was growing, however, the signal of selection is difficult to detect under some circumstances. I also discuss and give a few suggestions that might lead to more reliable identification of genes targeted by selection in large-scale genomic data.
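For readers unfamiliar with ABC, the core rejection-sampling loop fits in a few lines. The following Python sketch is a toy: the exponential simulator, uniform prior, and single mean summary statistic are hypothetical stand-ins for the demographic models and summaries used in the thesis:

```python
# Minimal ABC rejection-sampling sketch for a single parameter.
import random

def simulate_summary(rate, n=200):
    """Simulate n observations under a toy model; return a summary stat."""
    sample = [random.expovariate(rate) for _ in range(n)]
    return sum(sample) / n                       # sample mean as summary

def abc_rejection(observed_stat, n_draws=5000, tolerance=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.1, 10.0)        # draw from the prior
        if abs(simulate_summary(theta) - observed_stat) < tolerance:
            accepted.append(theta)               # keep parameters whose
    return accepted                              # simulations match the data

posterior = abc_rejection(observed_stat=0.5)     # true rate would be ~2.0
print(len(posterior), sum(posterior) / max(len(posterior), 1))
```

The accepted draws approximate the posterior distribution of the parameter; shrinking the tolerance trades acceptance rate for accuracy.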
103

Flexible techniques for heterogeneous XML data retrieval

Sanz Blasco, Ismael 31 October 2007 (has links)
The progressive adoption of XML by new communities of users has motivated the appearance of applications that require the management of large and complex collections, which present a large amount of heterogeneity. Relevant examples are found in the fields of bioinformatics, cultural heritage, ontology management and geographic information systems, where heterogeneity is reflected not only in the textual content of documents but also in the presence of rich structures which cannot be properly accounted for using fixed schema definitions. Current approaches for dealing with heterogeneous XML data are, however, mainly focused on the content level, whereas at the structural level only a limited amount of heterogeneity is tolerated, for instance by weakening the parent-child relationship between nodes into the ancestor-descendant relationship.

The main objective of this thesis is devising new approaches for querying heterogeneous XML collections. This general objective has several implications. First, a collection can present different levels of heterogeneity at different granularity levels, a fact with a significant impact on the selection of specific approaches for handling, indexing and querying the collection. Therefore, several metrics are proposed for evaluating the level of heterogeneity at different levels, based on information-theoretical considerations. These metrics can be employed to characterize collections and to cluster together collections with similar characteristics.

Second, the high structural variability implies that query techniques based on exact tree matching, such as the standard XPath and XQuery languages, are not suitable for heterogeneous XML collections. As a consequence, approximate querying techniques based on similarity measures must be adopted. We present a formal framework for the creation of similarity measures, based on a study of the literature showing that most approaches to approximate XML retrieval (i) are highly tailored to very specific problems and (ii) use similarity measures for ranking that can be expressed as ad hoc combinations of a set of 'basic' measures. Examples of such widely used measures are tf-idf for textual information and several variations of edit distances. Our approach wraps these basic measures into generic, parametrizable components that can be combined into complex measures by exploiting the composite pattern, commonly used in software engineering. This approach also allows us to seamlessly integrate highly specific measures, such as protein-oriented matching functions.

Finally, these measures are employed for the approximate retrieval of data in a context of high structural heterogeneity, using a new approach based on the concepts of pattern and fragment. In our context, a pattern is a concise representation of the information needs of a user, and a fragment is a match of a pattern found in the database. A pattern consists of a set of tree-structured elements: essentially an XML subtree that is intended to be found in the database, but with a flexible semantics that depends strongly on the particular similarity measure. For example, depending on the measure, the particular hierarchy of elements, or the ordering of siblings, may or may not be deemed relevant when searching for occurrences in the database. Fragment matching, as a query primitive, can deal with a much higher degree of flexibility than existing approaches.

In this thesis we provide exhaustive and top-k query algorithms. In the latter case, we adopt an approach that does not require the similarity measure to be monotonic, as all previous XML top-k algorithms (usually based on Fagin's algorithm) do. We also present two extensions which are important in practical settings: a specification for the integration of the aforementioned techniques into XQuery, and a clustering algorithm that is useful for managing complex result sets. All of the algorithms have been implemented as part of ArHeX, a toolkit for the development of multi-similarity XML applications, which supports fragment-based queries through an extension of the XQuery language and includes graphical tools for designing similarity measures and querying collections. We have used ArHeX to demonstrate the effectiveness of our approach using both synthetic and real data sets, in the context of a biomedical research project.
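The composite idea lends itself to a compact sketch. In this hypothetical Python rendering (the class names and the two leaf measures are illustrative, not ArHeX's actual API), 'basic' measures are leaves and combiners are composites that can nest arbitrarily:

```python
# Composite pattern for similarity measures: leaves are basic measures,
# composites combine sub-measures into more complex ones.
class Measure:
    def score(self, query, fragment):
        raise NotImplementedError

class TagOverlap(Measure):
    """Leaf: Jaccard similarity between the sets of element tags."""
    def score(self, query, fragment):
        q, f = set(query), set(fragment)
        return len(q & f) / len(q | f) if (q | f) else 1.0

class SizeRatio(Measure):
    """Leaf: ratio of the smaller fragment size to the larger one."""
    def score(self, query, fragment):
        a, b = len(query), len(fragment)
        return min(a, b) / max(a, b) if max(a, b) else 1.0

class WeightedSum(Measure):
    """Composite: weighted combination of arbitrary sub-measures."""
    def __init__(self, parts):                 # parts: [(weight, Measure)]
        self.parts = parts
    def score(self, query, fragment):
        return sum(w * m.score(query, fragment) for w, m in self.parts)

measure = WeightedSum([(0.7, TagOverlap()), (0.3, SizeRatio())])
print(measure.score(["article", "title"], ["article", "abstract"]))  # ~0.53
```

Because WeightedSum is itself a Measure, combined measures can be placed inside other combiners, which is exactly what the composite pattern buys here.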
104

Reductions and Triangularizations of Sets of Matrices

Davidson, Colin January 2006 (has links)
Families of operators that are triangularizable must necessarily satisfy a number of spectral mapping properties. These necessary conditions are often sufficient as well. This thesis investigates such properties in finite-dimensional and infinite-dimensional Banach spaces. In addition, we investigate whether approximate spectral mapping conditions (being "close" in some sense) are similarly sufficient.
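One classical finite-dimensional instance of the spectral mapping conditions alluded to here is McCoy's characterization, stated below in generic notation (a standard result, not necessarily the thesis's formulation):

```latex
% McCoy's theorem: a spectral-mapping-type characterization of
% simultaneous triangularizability for pairs of complex matrices.
A, B \in M_n(\mathbb{C}) \text{ are simultaneously triangularizable}
\iff p(A,B)\,(AB - BA) \text{ is nilpotent for every polynomial } p
\text{ in two noncommuting variables.}
% In a common triangular form the diagonal entries pair up, giving
\sigma\bigl(p(A,B)\bigr) = \{\, p(\lambda_i, \mu_i) : 1 \le i \le n \,\}.
```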
105

Approximate Private Quantum Channels

Dickinson, Paul January 2006 (has links)
This thesis includes a survey of the results known for private and approximate private quantum channels. We develop the best known upper bound for ε-randomizing maps: n + 2log(1/ε) + c bits of key suffice to ε-randomize an arbitrary n-qubit state, improving a scheme of Ambainis and Smith [5] based on small-bias spaces [16, 3]. We show by a probabilistic argument that, in fact, the great majority of random schemes using slightly more than this many bits of key are also ε-randomizing. We provide the first known nontrivial lower bound for ε-randomizing maps, and develop several conditions on them which we hope may be useful in proving stronger lower bounds in the future.
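For context, the standard definition of an ε-randomizing map in the trace norm, in generic notation (the thesis's exact conventions may differ):

```latex
% A random-unitary map on n qubits is epsilon-randomizing if it sends
% every input state close to the maximally mixed state:
R(\rho) = \frac{1}{K} \sum_{k=1}^{K} U_k\, \rho\, U_k^{\dagger},
\qquad
\Bigl\| R(\rho) - \tfrac{I}{2^{n}} \Bigr\|_{\mathrm{tr}} \le \varepsilon
\quad \text{for all } \rho .
% Perfect privacy (epsilon = 0) requires 2n key bits, i.e. K = 2^{2n};
% the bound in the abstract says K = 2^{\,n + 2\log(1/\varepsilon) + c}
% unitaries suffice when only approximate privacy is required.
```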
108

Adaptive Range Counting and Other Frequency-Based Range Query Problems

Wilkinson, Bryan T. January 2012 (has links)
We consider variations of range searching in which, given a query range, our goal is to compute some function based on the frequencies of the points that lie in the range. The most basic such computation involves counting the number of points in a query range. Data structures that compute this function solve the well-studied range counting problem. We consider adaptive and approximate data structures for the 2-D orthogonal range counting problem under the w-bit word RAM model. The query time of an adaptive range counting data structure is sensitive to k, the number of points being counted. We give an adaptive data structure that requires O(n loglog n) space and O(loglog n + log_w k) query time. Non-adaptive data structures, on the other hand, require Ω(log_w n) query time (Pătraşcu, 2007). Our specific bounds are interesting for two reasons. First, when k=O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem (Chan et al., 2011). Second, when k=Θ(n), our data structure matches the aforementioned Ω(log_w n) query time lower bound. We also give approximate data structures for 2-D orthogonal range counting whose bounds match the state of the art for the 2-D orthogonal range emptiness problem. Our first data structure requires O(n loglog n) space and O(loglog n) query time. Our second data structure requires O(n) space and O(log^ε n) query time for any fixed constant ε>0. These data structures compute an approximation k' such that (1-δ)k ≤ k' ≤ (1+δ)k for any fixed constant δ>0.

The range selection query problem in an array involves finding the kth lowest element in a given subarray. Range selection in an array is very closely related to 3-sided 2-D orthogonal range counting. An extension of our technique for 3-sided 2-D range counting yields an efficient solution to adaptive range selection in an array. In particular, we present an adaptive data structure that requires O(n) space and O(log_w k) query time, exactly matching a recent lower bound (Jørgensen and Larsen, 2011).

We next consider a variety of frequency-based range query problems in arrays. We give efficient data structures for the range mode and least frequent element query problems, and exhibit the hardness of these problems by reducing Boolean matrix multiplication to the construction and use of a range mode or least frequent element data structure. We also give data structures for the range α-majority and α-minority query problems. An α-majority is an element whose frequency in a subarray is greater than an α fraction of the size of the subarray; any other element is an α-minority. Surprisingly, geometric insights prove useful even in the design of our 1-D range α-majority and α-minority data structures.
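As a point of reference only, below is a textbook merge-sort-tree baseline for 2-D orthogonal range counting in Python. It answers queries in O(log² n) time, nowhere near the adaptive and approximate bounds above, but it makes the query being solved concrete; the class and method names are illustrative:

```python
# Merge sort tree: a segment tree over x-sorted points in which each node
# stores the sorted y-values of its range; queries cost O(log^2 n).
import bisect

class RangeCount2D:
    def __init__(self, points):
        pts = sorted(points)                       # sort points by x
        self.xs = [p[0] for p in pts]
        n = len(pts)
        self.tree = [[] for _ in range(4 * n)]
        self._build(1, 0, n - 1, pts)

    def _build(self, node, lo, hi, pts):
        self.tree[node] = sorted(p[1] for p in pts[lo:hi + 1])
        if lo == hi:
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, pts)
        self._build(2 * node + 1, mid + 1, hi, pts)

    def count(self, x1, x2, y1, y2):
        """Number of points with x1 <= x <= x2 and y1 <= y <= y2."""
        lo = bisect.bisect_left(self.xs, x1)
        hi = bisect.bisect_right(self.xs, x2) - 1
        if lo > hi:
            return 0
        return self._query(1, 0, len(self.xs) - 1, lo, hi, y1, y2)

    def _query(self, node, lo, hi, ql, qr, y1, y2):
        if qr < lo or hi < ql:
            return 0
        if ql <= lo and hi <= qr:                  # node fully inside query
            ys = self.tree[node]
            return bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
        mid = (lo + hi) // 2
        return (self._query(2 * node, lo, mid, ql, qr, y1, y2)
                + self._query(2 * node + 1, mid + 1, hi, ql, qr, y1, y2))

rc = RangeCount2D([(1, 1), (2, 5), (3, 3), (5, 2)])
print(rc.count(1, 3, 1, 4))                        # -> 2: (1,1) and (3,3)
```

Note that this baseline spends the same time whether k is small or large, which is precisely what the thesis's adaptive structures avoid.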
109

A study on the parameter estimation based on rounded data

Li, Gen-liang 21 January 2011 (has links)
Most recorded data are rounded to the nearest decimal place due to the precision of the recording mechanism. This rounding entails errors in estimation and measurement. In this paper, we compare the performance of three types of estimators based on rounded data from time series models: the A-K corrected estimator, the approximate MLE, and the SOS estimator. To perform the comparison, the A-K corrected estimators for the MA(1) model are derived theoretically. To improve estimation efficiency, two variance-reduction estimators are further proposed, based on linear combinations of the three estimators above. Simulation results show that the proposed variance-reduction estimators significantly improve estimation efficiency.
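The linear-combination idea admits a short sketch. The following Python snippet applies the standard minimum-variance weighting of two roughly unbiased estimators of the same parameter; it is a generic illustration, and the paper's own construction for the rounded-data estimators may differ:

```python
# Minimum-variance linear combination w*a + (1-w)*b of two estimators.
import random
import statistics

def min_variance_combination(draws_a, draws_b):
    """Return the optimal weight w and the combined estimator draws."""
    var_a = statistics.variance(draws_a)
    var_b = statistics.variance(draws_b)
    mean_a = statistics.mean(draws_a)
    mean_b = statistics.mean(draws_b)
    n = len(draws_a)
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(draws_a, draws_b)) / (n - 1)
    w = (var_b - cov) / (var_a + var_b - 2 * cov)   # minimizes the variance
    return w, [w * a + (1 - w) * b for a, b in zip(draws_a, draws_b)]

# Toy check: two independent noisy estimators of theta = 0.5.
a = [0.5 + random.gauss(0, 0.10) for _ in range(1000)]
b = [0.5 + random.gauss(0, 0.15) for _ in range(1000)]
w, combined = min_variance_combination(a, b)
print(w, statistics.variance(combined))  # below both input variances
```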
110

Exact D-optimal Designs for First-order Trigonometric Regression Models on a Partial Circle

Sun, Yi-Ying 24 June 2011 (has links)
Recently, various approximate design problems for low-degree trigonometric regression models on a partial circle have been solved. In this paper we consider approximate and exact optimal design problems for first-order trigonometric regression models without intercept on a partial circle. We investigate the intricate geometry of the non-convex exact trigonometric moment set and provide characterizations of its boundary. Building on these results we obtain a complete solution of the exact D-optimal design problem. It is shown that the structure of the optimal designs depends on both the length of the design interval and the number of observations.
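In generic notation (an assumed convention, not quoted from the paper), the model and criterion at issue are:

```latex
% First-order trigonometric regression without intercept on a
% partial circle:
y(t) = \beta_1 \cos t + \beta_2 \sin t + \varepsilon,
\qquad t \in [-a, a], \; 0 < a \le \pi .
% For an exact design taking one observation at each of t_1, ..., t_N,
% the information matrix is
M(t_1,\dots,t_N) = \sum_{i=1}^{N}
\begin{pmatrix}
\cos^2 t_i & \sin t_i \cos t_i \\
\sin t_i \cos t_i & \sin^2 t_i
\end{pmatrix},
% and an exact D-optimal design chooses the t_i to maximize \det M.
```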
