21

Unsupervised Activity Discovery and Characterization for Sensor-Rich Environments

Hamid, Muhammad Raffay 28 November 2005 (has links)
This thesis presents an unsupervised method for discovering and analyzing the different kinds of activities in an active environment. Drawing on ideas from natural language processing, a novel representation of activities as bags of event n-grams is introduced, which captures the global structure of activities through their local event statistics. It is demonstrated how maximal cliques in an undirected edge-weighted graph of activities can be used, in an unsupervised manner, to discover the different activity classes. Building on work in computer networks and bioinformatics, it is shown how to characterize these discovered activity classes from a holistic as well as a by-parts viewpoint. A definition of anomalous activities is formulated, along with a way to detect them based on the difference of an activity instance from each of the discovered activity classes. Finally, an information-theoretic method to explain the detected anomalies in a human-interpretable form is presented. Results over extensive datasets, collected from multiple active environments, are presented to show the competence and generalizability of the proposed framework.
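The discovery step described above can be sketched in a few lines: represent each activity as a bag of event n-grams, connect activities whose bags are similar, and read off maximal cliques of the similarity graph as activity classes. The toy event sequences, the cosine similarity, and the 0.3 threshold below are illustrative assumptions, not the thesis's actual data or parameters.

```python
from collections import Counter
from math import sqrt

def ngram_bag(events, n=2):
    """Represent an activity (a sequence of events) as a bag of event n-grams."""
    return Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))

def cosine(a, b):
    """Cosine similarity between two bags (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques in an undirected graph."""
    cliques = []
    def bk(R, P, X):
        if not P and not X:
            cliques.append(R)
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return cliques

# two hypothetical behaviour patterns, three and two instances each
activities = [list("abab"), list("abba"), list("baba"),
              list("cdcd"), list("dcdc")]
bags = [ngram_bag(a) for a in activities]
thresh = 0.3
adj = {i: {j for j in range(len(bags))
           if j != i and cosine(bags[i], bags[j]) > thresh}
       for i in range(len(bags))}
# maximal cliques of size > 1 are the discovered activity classes
classes = [c for c in maximal_cliques(adj) if len(c) > 1]
```

Here the two cliques recovered are exactly the two behaviour patterns that generated the sequences.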
22

The Approximability of Learning and Constraint Satisfaction Problems

Wu, Yi 07 October 2010 (has links)
An α-approximation algorithm is an algorithm guaranteed to output a solution that is within an α ratio of the optimal solution. We are interested in the following question: given an NP-hard optimization problem, what is the best approximation guarantee that any polynomial-time algorithm could achieve? We mostly focus on studying the approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and computational learning problems. For CSPs, we mainly study the approximability of MAX CUT, MAX 3-CSP, MAX 2-LIN(Z), VERTEX-PRICING, and several variants of Unique Games.
• The problem of MAX CUT is to find a partition of a graph so as to maximize the number of edges between the two parts. Assuming the Unique Games Conjecture, we give a complete characterization of the approximation curve of the MAX CUT problem: for every optimum value of the instance, we show that a certain SDP algorithm with RPR² rounding always achieves the optimal approximation curve.
• The input to a 3-CSP is a set of Boolean constraints such that each constraint contains at most 3 Boolean variables. The goal is to find an assignment to these variables that maximizes the number of satisfied constraints. We are interested in the case when a 3-CSP is satisfiable, i.e., there exists an assignment that satisfies every constraint. Assuming the d-to-1 conjecture (a variant of the Unique Games Conjecture), we prove that it is NP-hard to give a better than 5/8-approximation for the problem. This result matches an SDP algorithm by Zwick which gives a 5/8-approximation for satisfiable 3-CSPs. In addition, our result also conditionally resolves a fundamental open problem in PCP theory on the optimal soundness of a 3-query nonadaptive PCP system for NP with perfect completeness.
• The problem of MAX 2-LIN(Z) involves a linear system of integer equations so simple that each equation contains at most 2 variables. The goal is to find an assignment to the variables that maximizes the total number of satisfied equations. It is a natural generalization of the Unique Games problem, which addresses the hardness of the same equation systems over finite fields. We show that, assuming the Unique Games Conjecture, for a MAX 2-LIN(Z) instance, even when there exists a solution that satisfies 1−ε of the equations, it is NP-hard to find one that satisfies ε of the equations, for any ε > 0.
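For context on the MAX CUT bullet: even the simplest algorithms guarantee half the edges, which is what makes the UGC-tight SDP approximation curve above interesting. Below is a minimal local-search sketch (not the SDP/RPR² algorithm studied in the thesis): repeatedly flip any vertex whose move increases the cut; at a local optimum each vertex has at least half its incident edges cut, so the total cut is at least half the edges, a 1/2-approximation.

```python
def local_search_cut(n, edges):
    """Greedy local search for MAX CUT: flip any vertex whose move increases
    the cut. Terminates with a cut of at least half the edges."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain from flipping v = (uncut incident edges) - (cut incident edges)
            same = sum(1 for a, b in edges if v in (a, b) and side[a] == side[b])
            cut = sum(1 for a, b in edges if v in (a, b) and side[a] != side[b])
            if same > cut:
                side[v] ^= 1
                improved = True
    return side, sum(1 for a, b in edges if side[a] != side[b])
```

On a triangle this finds the optimal cut of 2 out of 3 edges; on the complete graph K4 it finds the optimal cut of 4 out of 6.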
23

Intractability results for problems in computational learning and approximation

Saket, Rishi 29 June 2009 (has links)
In this thesis we prove intractability results for well-studied problems in computational learning and approximation. Let ε, μ > 0 be arbitrarily small constants and t an arbitrary constant positive integer. We show an almost optimal hardness factor of d^{1−ε} for computing an equivalent DNF expression with the minimum number of terms for a Boolean function on d variables, given its truth table. In the study of weak learnability, we prove an optimal 1/2 + ε inapproximability for the accuracy of learning an intersection of two halfspaces with an intersection of t halfspaces. Further, we study the learnability of small DNF formulas, and prove optimal 1/2 + ε inapproximability for the accuracy of learning (i) a two-term DNF by a t-term DNF, and (ii) an AND under adversarial μ-noise by a t-CNF. In addition, we show a 1 − 2^{−d} + ε inapproximability for accurately learning parities (over GF(2)), under adversarial μ-noise, by degree-d polynomials, where d is a constant positive integer. We also provide negative answers to the possibility of stronger semidefinite programming (SDP) relaxations yielding much better approximations for graph partitioning problems such as Maximum Cut and Sparsest Cut by constructing integrality-gap examples for them. For Maximum Cut and Sparsest Cut we construct examples -- with gaps α^{−1} − ε (α is the Goemans-Williamson constant) and Ω((log log log n)^{1/13}) respectively -- for the standard SDP relaxations augmented with O((log log log n)^{1/6}) rounds of Sherali-Adams constraints. The construction for Sparsest Cut also implies that an n-point negative-type metric may incur a distortion of Ω((log log log n)^{1/13}) to embed into ℓ_1 even if the induced submetric on every subset of O((log log log n)^{1/6}) points is isometric to ℓ_1. We also construct an integrality gap of Ω(log log n) for the SDP relaxation of the Uniform Sparsest Cut problem augmented with triangle inequalities, disproving a well-known conjecture of Arora, Rao and Vazirani.
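The first hardness result above concerns a problem that is easy to state and to brute-force at tiny scale: given the truth table of a Boolean function, find the fewest DNF terms that express it. A sketch (exponential time, illustration only; the hardness result is precisely about the absence of good efficient approximations):

```python
from itertools import product, combinations

def min_dnf_terms(d, truth):
    """Brute-force the minimum number of DNF terms for a Boolean function
    given as a truth table truth[assignment] -> 0/1. Exponential; tiny d only."""
    points = list(product([0, 1], repeat=d))
    ones = [p for p in points if truth[p]]

    # a term fixes each variable to 0, 1, or '*' (don't care)
    def covers(term, p):
        return all(t == '*' or t == x for t, x in zip(term, p))

    # valid terms (implicants) cover no 0-row; drop the empty all-'*' term
    implicants = [t for t in product([0, 1, '*'], repeat=d)
                  if any(v != '*' for v in t)
                  and all(not covers(t, p) for p in points if not truth[p])]

    # smallest set of implicants covering every 1-row
    for k in range(1, len(ones) + 1):
        for terms in combinations(implicants, k):
            if all(any(covers(t, p) for t in terms) for p in ones):
                return k
    return 0
```

For example, the 3-variable majority function needs 3 terms (xy, yz, xz), while a simple AND needs 1.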
24

Contributions to statistical learning and statistical quantification in nanomaterials

Deng, Xinwei 22 June 2009 (has links)
This research focuses on developing new techniques in statistical learning, including methodology, computation, and application, as well as statistical quantification for nanomaterials. For a large number of random variables with temporal or spatial structure, we propose shrinkage estimates of the covariance matrix that account for their Markov structure. The proposed method exploits the sparsity of the inverse covariance matrix in a systematic fashion. To deal with high-dimensional data, we propose a robust kernel principal component analysis for dimension reduction, which extracts the nonlinear structure of high-dimensional data more robustly. To build prediction models more efficiently, we develop active learning via sequential design, which actively selects the data points admitted to the training set. By combining stochastic approximation with D-optimal designs, the proposed method can build a model with minimal time and effort. We also propose factor logit models for classification with a large number of categories. We show that the convergence rate of the classifier functions estimated from the proposed factor model does not depend on the number of categories, only on the number of factors; it can therefore achieve better classification accuracy. For statistical nano-quantification, a statistical approach is presented to quantify the elastic deformation of nanomaterials. We propose a new statistical modeling technique, called sequential profile adjustment by regression (SPAR), to account for and eliminate various experimental errors and artifacts. SPAR can automatically detect and remove systematic errors, and therefore gives a more precise estimate of the elastic modulus.
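The shrinkage idea in the first contribution can be illustrated with the classic "shrink toward a scaled identity" covariance estimator. The thesis's estimator additionally exploits Markov/sparsity structure in the inverse covariance; this is only the generic flavor, in plain Python:

```python
def shrink_cov(X, lam):
    """Shrinkage covariance estimate: (1 - lam) * S + lam * mu * I, where S is
    the sample covariance of the rows of X and mu is the average variance.
    Pulls noisy off-diagonal entries toward zero."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    S = [[sum((X[k][i] - mean[i]) * (X[k][j] - mean[j]) for k in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    mu = sum(S[i][i] for i in range(d)) / d  # average variance on the diagonal
    return [[(1 - lam) * S[i][j] + (lam * mu if i == j else 0.0)
             for j in range(d)] for i in range(d)]
```

With perfectly correlated toy data [[1,2],[2,4],[3,6]] the sample covariance is [[1,2],[2,4]]; shrinking with lam = 0.5 halves the off-diagonal (2 to 1.0) and pulls the variances toward their average of 2.5.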
25

A proposed algorithm toward uniform-distribution monotone DNF learning

Bi, Wenzhu. January 2004 (has links)
Thesis (M.S.)--Duquesne University, 2004. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 24-25) and index.
26

Mathematical Theories of Interaction with Oracles

Yang, Liu 01 October 2013 (has links)
No description available.
27

The application of artificial intelligence techniques in glass production processes.

Herbert Rodrigues do Nascimento Costa 06 November 2006 (has links)
Artificial Intelligence is currently a vast field of research. Among the many techniques under study, this thesis uses fuzzy theory, decision trees, and neural networks. All three have been applied successfully in areas such as automation and control, pattern recognition, speech recognition, fault detection, and classification, among others. Fuzzy theory makes it possible to work with uncertainty and provides a symbolic representation of knowledge. Decision trees can build symbolic decisions for classification problems, and the knowledge they capture can be turned into symbolic rules for decision-making. Fuzzy theory can also be incorporated into decision trees, increasing their representational power and applicability. Neural networks (with the back-propagation algorithm) have shown excellent results in function learning and in classification problems. The contribution of this thesis is to show the application of these three Artificial Intelligence (AI) techniques to glass manufacturing processes. The manufacturing processes were analyzed, and the thesis proposes applying AI techniques in factories producing container glass and flat glass. In the first factory, AI techniques are applied to classify the defects that occur in container glass as a function of the operating conditions of the melting furnaces. In the second factory, the techniques are applied to classify defects as a function of the raw materials used in glass production. In the third factory, the techniques are applied to classify the manufacturing patterns of flat glass. The results obtained for the classification of defects and patterns were generally satisfactory. The three AI techniques were used to analyze the databases of the three glass factories studied in this thesis; they produced satisfactory classifications of container-glass defects and of flat-glass patterns, and the results obtained with the different techniques are compared and look promising.
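As a flavor of the fuzzy-theory ingredient, here is a minimal winner-takes-all fuzzy classifier built from triangular membership functions. The linguistic terms, the furnace-temperature ranges, and the defect names below are invented for illustration and do not come from the thesis's actual furnace data:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_defect(temp):
    """Map a (hypothetical) furnace temperature to a (hypothetical) defect
    class: one fuzzy rule per class, winner-takes-all defuzzification."""
    low = tri(temp, 1350, 1400, 1450)
    ok = tri(temp, 1430, 1480, 1530)
    high = tri(temp, 1510, 1560, 1610)
    scores = {'seed/bubble': low, 'no defect': ok, 'cord': high}
    return max(scores, key=scores.get)
```

A real system would fuzzify several process variables and combine rules, but the classification step has exactly this shape.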
28

Efficient recovery algorithms with restricted access to strings

Sinha, Sandip January 2022 (has links)
We design efficient algorithms for computational problems over strings in several models where the algorithms have limited access to the input. These models, and algorithms developed respecting these constraints, are becoming increasingly relevant due to the rapidly increasing size of datasets in myriad applications. Our first problem of interest is trace reconstruction. This is an important problem in learning theory and coding theory, and has applications in computational biology. In this problem, the goal is to recover an unknown string given independent samples (traces) of it generated via a probabilistic noise process called the deletion channel. We give state-of-the-art algorithms for this problem in several settings. Then we consider the problem of estimating the longest increasing subsequence (LIS) of a given string in sublinear time, given query access to the string. While the LIS of a string can be computed exactly in near-linear time, the optimal complexity of approximating the LIS length, especially when the LIS is much shorter than the string, is still open. We significantly improve upon prior work in terms of both approximation and time complexity in this regime. The runtime of our algorithm essentially matches the trivial query complexity lower bound as a function of the length of the LIS. Finally, we consider the problem of local decoding, or random access, on compressed strings. The Burrows-Wheeler Transform (BWT) is an important preprocessing step in lossless text compression that rearranges a string into runs of identical characters (by exploiting context regularities), resulting in highly compressible strings. However, the decoding process of the BWT is inherently sequential, and prevents fast random access to the original string. We design a succinct data structure for locally decoding short substrings (and answering several other queries) of a given string under its compressed BWT efficiently.
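The BWT step in the last paragraph is easy to demonstrate end to end. A naive sketch follows (quadratic time; production BWT indexes use suffix arrays and LF-mapping, and the local-decoding structure in the thesis is far more involved). Note how the transform groups identical characters into runs:

```python
def bwt(s):
    """Burrows-Wheeler Transform via sorted rotations; '$' marks the end
    and must not occur in the input."""
    s += '$'
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(r[-1] for r in rotations)

def inverse_bwt(L):
    """Invert the BWT by repeatedly prepending L to the sorted table.
    O(n^2 log n); fine as a sketch, not a production decoder."""
    table = [''] * len(L)
    for _ in range(len(L)):
        table = sorted(L[i] + table[i] for i in range(len(L)))
    return next(row for row in table if row.endswith('$'))[:-1]
```

For example, bwt("banana") yields "annb$aa", and the inverse recovers "banana" exactly.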
29

Implementing and Evaluating Automaton Learning Algorithms for a Software Testing Platform

Khosravi Bakhtiari, Mohsen January 2015 (has links)
The Software Reliability group at KTH CSC has designed and built a novel test platform, LBTest, for black-box requirements testing of reactive and embedded software systems (e.g. web servers, automotive control units, etc.). The main concept of LBTest is to create a large number of test cases by combining an automaton learning algorithm with a model checking algorithm (NuSMV). The platform aims to support different learned automata, learning algorithms, and model checking algorithms, which can be combined to implement the paradigm of learning-based testing (LBT). This thesis project investigates an existing published algorithm for learning deterministic finite automata (DFA), known as Kearns' algorithm. The aim of the thesis is to investigate how effective Kearns' algorithm is from a software testing perspective. Angluin's well-known L* DFA learning algorithm has a simple structure and implementation. Kearns' algorithm, by contrast, has a more complex structure and is harder to implement than L*, but it is more efficient and faster. For this reason, the plan was to implement this advanced DFA learning algorithm [4] in Java from a description in the literature. We consider a methodology to compare Kearns' algorithm with Angluin's DFA learning algorithm, based on the master's thesis of Czerny [8]. The comparisons between the Kearns and L* algorithms are based on the number of membership and equivalence queries, in order to investigate the difficulty of learning.
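The comparison metric (counts of membership versus equivalence queries) can be made concrete with a small counting oracle. The DFA encoding and the bounded-length equivalence check below are assumptions for illustration; they are not LBTest's actual interfaces:

```python
from itertools import product

class Teacher:
    """Oracle for a target DFA that counts the two query types used to
    compare L* with Kearns' algorithm."""
    def __init__(self, alphabet, delta, start, accept):
        self.alphabet, self.delta = alphabet, delta
        self.start, self.accept = start, accept
        self.membership_queries = 0
        self.equivalence_queries = 0

    def _accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q in self.accept

    def member(self, word):
        self.membership_queries += 1
        return self._accepts(word)

    def equivalent(self, hypothesis, max_len=5):
        """Bounded-length equivalence check: return a word on which the
        hypothesis disagrees with the target, or None. (An exact check
        would use a product automaton instead.)"""
        self.equivalence_queries += 1
        for n in range(max_len + 1):
            for w in product(self.alphabet, repeat=n):
                word = ''.join(w)
                if self._accepts(word) != hypothesis(word):
                    return word
        return None

# target DFA: words over {a, b} with an even number of 'a's
delta = {('even', 'a'): 'odd', ('even', 'b'): 'even',
         ('odd', 'a'): 'even', ('odd', 'b'): 'odd'}
teacher = Teacher('ab', delta, 'even', {'even'})
cex = teacher.equivalent(lambda w: True)  # hypothesis: accept everything
```

The all-accepting hypothesis is refuted by the counterexample "a", and the teacher's counters record one equivalence query so far; a learner's query counts against such a teacher are exactly the quantities compared in the thesis.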
30

A method based on computational intelligence for automatic black-box test case generation.

Sá, Hindenburgo Elvas Gonçalves de 09 September 2010 (has links)
This dissertation presents a method based on computational intelligence techniques, such as rule-set learning, artificial neural networks, and fuzzy logic, and proposes the development of tools capable of generating and classifying black-box test cases. The goals are to assist in test preparation, to detect defects in features or functionality, and to reduce the time needed to detect and correct software defects, thereby achieving test coverage qualitatively superior to a manual creation process. New test cases are obtained, and the generated test cases classified, using rule-set learning techniques based on sequential covering algorithms and a fuzzy inference engine. The choice of methods, both for generating and for classifying the test cases, was grounded in experiments comparing the similarities between the fuzzy, artificial neural network, and rule-set learning approaches. Finally, a proof-of-concept tool was developed to apply the methods that obtained the best results in the experiments. The criteria adopted to select the methods were the cyclomatic complexity and total lines of code (LOC) metrics.
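The sequential covering idea mentioned above can be sketched in plain Python: greedily grow one high-precision rule for the positive class, remove the examples it covers, and repeat. The toy weather-style dataset and the precision-greedy growth strategy are assumptions for illustration, not the dissertation's actual implementation:

```python
def learn_rules(examples, attrs):
    """Sequential covering: learn a list of attribute=value rules, each grown
    greedily by precision, removing covered examples between rules."""
    rules = []
    data = list(examples)
    while any(label for _, label in data):   # positives remain uncovered
        rule, covered = {}, data
        while True:
            # try every attribute=value test and keep the most precise one
            best = None
            for a in attrs:
                if a in rule:
                    continue
                for v in {x[a] for x, _ in covered}:
                    sub = [(x, y) for x, y in covered if x[a] == v]
                    prec = sum(y for _, y in sub) / len(sub)
                    if best is None or prec > best[0]:
                        best = (prec, a, v, sub)
            prec, a, v, covered = best
            rule[a] = v
            if prec == 1.0 or len(rule) == len(attrs):
                break
        rules.append(rule)
        data = [(x, y) for x, y in data
                if not all(x[a] == v for a, v in rule.items())]
    return rules

examples = [
    ({'sky': 'sunny', 'wind': 'weak'}, 1),
    ({'sky': 'sunny', 'wind': 'strong'}, 1),
    ({'sky': 'rainy', 'wind': 'weak'}, 0),
    ({'sky': 'rainy', 'wind': 'strong'}, 0),
]
rules = learn_rules(examples, ['sky', 'wind'])
```

On this toy data the learner finds the single rule sky=sunny, which covers both positives with perfect precision, so covering stops after one round.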
