21

Contributions to statistical learning and statistical quantification in nanomaterials

Deng, Xinwei 22 June 2009 (has links)
This research develops new techniques in statistical learning, spanning methodology, computation, and application, together with methods for statistical quantification in nanomaterials. For a large number of random variables with temporal or spatial structure, we propose shrinkage estimates of the covariance matrix that account for their Markov structure; the proposed method exploits sparsity in the inverse covariance matrix in a systematic fashion. To deal with high-dimensional data, we propose a robust kernel principal component analysis for dimension reduction, which extracts the nonlinear structure of high-dimensional data more robustly. To build prediction models more efficiently, we develop active learning via sequential design, which actively selects data points for the training set; by combining stochastic approximation with D-optimal designs, the proposed method builds a model with minimal time and effort. We also propose factor logit models for classification with a large number of categories, and show that the convergence rate of the classifier functions estimated from the proposed factor model depends not on the number of categories but only on the number of factors; it can therefore achieve better classification accuracy. For statistical nano-quantification, a statistical approach is presented to quantify the elastic deformation of nanomaterials. We propose a new statistical modeling technique, sequential profile adjustment by regression (SPAR), to account for and eliminate various experimental errors and artifacts. SPAR can automatically detect and remove systematic errors, and therefore gives a more precise estimate of the elastic modulus.
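The shrinkage idea in this abstract can be made concrete with a minimal sketch: a convex combination of the sample covariance and a simple structured target. This is an illustrative Ledoit–Wolf-style example with a fixed, hypothetical intensity `alpha` and an identity-scaled target; it is not Deng's method, which instead exploits sparsity in the inverse covariance matrix.

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Shrink the sample covariance toward a scaled-identity target.

    X: (n_samples, n_features) data matrix.
    alpha: shrinkage intensity in [0, 1] (fixed here for illustration;
    principled methods choose it from the data).
    """
    S = np.cov(X, rowvar=False)                       # sample covariance
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target           # convex combination

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Sigma = shrinkage_covariance(X, alpha=0.3)
print(Sigma.shape)  # (10, 10)
```

Pulling the estimate toward a well-conditioned target trades a little bias for much lower variance, which is exactly what helps when the number of variables is large relative to the sample size.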
22

A proposed algorithm toward uniform-distribution monotone DNF learning

Bi, Wenzhu. January 2004 (has links)
Thesis (M.S.)--Duquesne University, 2004. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 24-25) and index.
23

Mathematical Theories of Interaction with Oracles

Yang, Liu 01 October 2013 (has links)
No description available.
24

Efficient recovery algorithms with restricted access to strings

Sinha, Sandip January 2022 (has links)
We design efficient algorithms for computational problems over strings in several models where the algorithms have limited access to the input. These models, and algorithms developed respecting these constraints, are becoming increasingly relevant due to the rapidly increasing size of datasets in myriad applications. Our first problem of interest is trace reconstruction. This is an important problem in learning theory and coding theory, and has applications in computational biology. In this problem, the goal is to recover an unknown string given independent samples (traces) of it generated via a probabilistic noise process called the deletion channel. We give state-of-the-art algorithms for this problem in several settings. Then we consider the problem of estimating the longest increasing subsequence (LIS) of a given string in sublinear time, given query access to the string. While the LIS of a string can be computed exactly in near-linear time, the optimal complexity of approximating the LIS length, especially when the LIS is much less than the string length, is still open. We significantly improve upon prior work in terms of both approximation and time complexity in this regime. The runtime of our algorithm essentially matches the trivial query complexity lower bound as a function of the length of the LIS. Finally, we consider the problem of local decoding, or random access, on compressed strings. The Burrows-Wheeler Transform (BWT) is an important preprocessing step in lossless text compression that rearranges a string into runs of identical characters (by exploiting context regularities), resulting in highly compressible strings. However, the decoding process of the BWT is inherently sequential, and prevents fast random access to the original string. We design a succinct data structure for locally decoding short substrings (and answering several other queries) of a given string under its compressed BWT efficiently.
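The deletion channel at the heart of trace reconstruction is easy to simulate, which helps make the problem concrete. This is a minimal sketch; the deletion probability `q` and the example string are illustrative choices, not parameters from the thesis.

```python
import random

def deletion_channel(s, q, rng):
    """Each character of s is independently deleted with probability q;
    the surviving characters, in their original order, form one trace."""
    return "".join(c for c in s if rng.random() >= q)

def traces(s, q, n, seed=0):
    """Draw n independent traces of s from the deletion channel."""
    rng = random.Random(seed)
    return [deletion_channel(s, q, rng) for _ in range(n)]

samples = traces("0110100110", q=0.2, n=5)
# every trace is a subsequence of the original string
```

The reconstruction task is the inverse direction: given only such samples, recover the unknown string using as few traces as possible.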
25

Generative fixation: a unified explanation for the adaptive capacity of simple recombinative genetic algorithms

Burjorjee, Keki M. January 2009 (has links)
Thesis (Ph. D.)--Brandeis University, 2009. / "UMI:3369218." Microfilm copy also available in the University Archives. Includes bibliographical references.
26

Intractability Results for some Computational Problems

Ponnuswami, Ashok Kumar 08 July 2008 (has links)
In this thesis, we show results for some well-studied problems from learning theory and combinatorial optimization. Learning Parities under the Uniform Distribution: We study the learnability of parities in the agnostic learning framework of Haussler and Kearns et al. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al, this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities. Namely, we show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and learning of k-juntas reduces to learning noisy parities of k variables. Agnostic Learning of Halfspaces: We give an essentially optimal hardness result for agnostic learning of halfspaces over the rationals. We show that for any constant ε, finding a halfspace that agrees with an unknown function on a 1/2+ε fraction of examples is NP-hard even when there exists a halfspace that agrees with the unknown function on a 1−ε fraction of examples. This significantly improves on a number of previous hardness results for this problem. We extend the result to ε = 2^(−Ω(√(log n))) assuming NP is not contained in DTIME(2^((log n)^O(1))). Majorities of Halfspaces: We show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem. This also implies a hardness result for learning halfspaces with a high rate of adversarial noise even if the learning algorithm can output any efficiently computable hypothesis.
Max-Clique, Chromatic Number and Min-3Lin-Deletion: We prove an improved hardness of approximation result for two problems, namely, the problem of finding the size of the largest clique in a graph (also referred to as the Max-Clique problem) and the problem of finding the chromatic number of a graph. We show that for any constant γ > 0, there is no polynomial time algorithm that approximates these problems within factor n/2^((log n)^(3/4+γ)) in an n-vertex graph, assuming NP is not contained in BPTIME(2^((log n)^O(1))). This improves the hardness factor of n/2^((log n)^(1−γ')) for some small (unspecified) constant γ' > 0 shown by Khot. Our main idea is to show an improved hardness result for the Min-3Lin-Deletion problem. An instance of Min-3Lin-Deletion is a system of linear equations modulo 2, where each equation is over three variables. The objective is to find the minimum number of equations that need to be deleted so that the remaining system of equations has a satisfying assignment. We show a hardness factor of 2^(√(log n)) for this problem, improving upon the hardness factor of (log n)^β shown by Håstad, for some small (unspecified) constant β > 0. The hardness results for Max-Clique and chromatic number are then obtained using the reduction from Min-3Lin-Deletion as given by Khot. Monotone Multilinear Boolean Circuits for Bipartite Perfect Matching: A monotone Boolean circuit is said to be multilinear if for any AND gate in the circuit, the minimal representations of the two input functions to the gate do not have any variable in common. We show that monotone multilinear Boolean circuits for computing bipartite perfect matching require exponential size. In fact we prove a stronger result by characterizing the structure of the smallest monotone multilinear Boolean circuits for the problem.
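The Min-3Lin-Deletion objective described above is straightforward to state in code: for a fixed assignment, the violated equations are exactly the ones that would have to be deleted. This is a small sketch with a made-up instance and assignment, just to pin down the objective (finding the best assignment is the hard part).

```python
def unsatisfied(equations, assignment):
    """Count equations x_i + x_j + x_k = b (mod 2) violated by assignment.

    equations: list of ((i, j, k), b) with b in {0, 1}.
    assignment: list of 0/1 values indexed by variable.
    Deleting exactly the violated equations leaves a satisfiable system,
    so this count upper-bounds the Min-3Lin-Deletion optimum.
    """
    return sum((assignment[i] ^ assignment[j] ^ assignment[k]) != b
               for (i, j, k), b in equations)

# Hypothetical instance: note the first two equations contradict each
# other, so at least one deletion is always required.
eqs = [((0, 1, 2), 1), ((0, 1, 2), 0), ((1, 2, 3), 1)]
x = [1, 0, 0, 1]
print(unsatisfied(eqs, x))  # 1
```

The hardness result says that even distinguishing instances where almost no deletions are needed from those needing many is intractable under the stated complexity assumption.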
27

Mathematical foundations of graded knowledge spaces

Bartl, Eduard. January 2009 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009. / Includes bibliographical references.
28

Classifiers for Discrimination of Significant Protein Residues and Protein-Protein Interaction Using Concepts of Information Theory and Machine Learning

Asper, Roman Yorick 26 October 2011 (has links)
No description available.
