About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Computing point-to-point shortest path using an approximate distance oracle

Poudel, Pawan. January 2008 (has links)
Thesis (M.C.S.)--Miami University, Dept. of Computer Science and Systems Analysis, 2008. / Title from first page of PDF document. Includes bibliographical references (p. 33-34).
2

Program construction and evolution in a persistent integrated programming environment

Farkas, Alex Miklós. January 1995 (has links) (PDF)
Thesis (Ph.D.)--University of Adelaide, Faculty of Engineering, 1995. / Includes bibliographical references.
3

Computer oriented algorithms for synthesizing multiple output combinational and finite memory sequential circuits

Su, Yueh-hsung, January 1967 (has links)
Thesis (Ph.D.)--University of Wisconsin--Madison, 1967. / Typescript. Vita. Description based on print version record. Includes bibliographical references (leaves 130-133).
4

Traitement des images bidimensionnelles à l'aide des FPGAs / Two-dimensional image processing using FPGAs

Horé, Alain, January 2005 (has links)
Thesis (M.Eng.)--Université du Québec à Chicoutimi, 2005. / Bibliography: leaves 116-119. Electronic document also available in PDF format. CaQCU
5

Algoritmos para problemas de classificação e particionamento em grafos / Algorithms for classification and partitioning in graphs

Meira, Luis Augusto Angelotti, 1979- 13 December 2007 (has links)
Advisor: Flavio Keidi Miyazawa / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação

Abstract (translated from the Portuguese): The work developed in this doctorate consisted of designing algorithms for a series of NP-hard problems under the approximability approach, complemented with heuristic and integer-programming results. The study focused on classification and partitioning problems in graphs, such as metric labeling, balanced cut, and clustering. A balance was kept between theory and applicability: we obtained algorithms with good approximation factors as well as algorithms that found quality solutions in competitive time. The study concentrated on three problems: the Uniform Metric Labeling Problem, the Balanced Cut Problem, and the continuous version of the Facility Location Problem. We first worked on the Uniform Metric Labeling Problem, for which we proposed an O(log n)-approximation algorithm. In the experimental validation, this algorithm obtained good-quality solutions in less time than traditional algorithms. For the Balanced Cut Problem, we proposed heuristics and an exact algorithm. Experimentally, we used a semidefinite-programming solver for the problem's relaxation and substantially improved the relaxation's solution time by building our own solver, based on inserting cuts into a linear-programming system. Finally, we worked on the continuous variant of the Facility Location Problem, presenting approximation algorithms for the l_2 and l_2^2 metrics. These algorithms were applied to obtain approximation algorithms for the k-Means Problem, a classic clustering problem. In an experimental comparison with a known implementation from the literature, the presented algorithms proved competitive, in several cases obtaining better-quality solutions in comparable time. The studies of these problems resulted in three papers, detailed in the chapters that make up this thesis.

Abstract: We present algorithms for NP-hard combinatorial optimization problems on classification and graph partitioning. The thesis balances theory and application and is guided by an approximation-algorithms approach, complemented with heuristics and integer programming. We propose algorithms with good approximation factors as well as algorithms that find quality solutions in competitive time. We focus on three problems: the Metric Labeling Problem, the Sparsest Cut Problem, and the Continuous Facility Location Problem. For the Metric Labeling Problem, we propose an O(log n)-approximation algorithm; in the experimental analysis, it found high-quality solutions in less time than other known algorithms. For the Sparsest Cut Problem, we propose heuristics and an exact algorithm. We built a solver for the semidefinite relaxation using semi-infinite cut generation over linear programming; this approach considerably reduces the time needed to solve the relaxation compared to an open-source semidefinite programming solver. Finally, for the Continuous Facility Location Problem, we present approximation algorithms for the l_2 and l_2^2 distance functions. These algorithms are used to obtain approximation algorithms for the k-Means Problem, a basic clustering problem. The presented algorithms are competitive, in many cases obtaining better solutions in equivalent time compared to other known algorithms. The study of these problems resulted in three papers, detailed in the chapters that make up this thesis.

Doctorate / Combinatorial Optimization / Doctor of Computer Science
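The k-Means Problem that this abstract uses as an application target asks for k centers minimizing the sum of squared l_2 distances from each point to its nearest center. In practice it is most often attacked with Lloyd's local-search heuristic, which alternates an assignment step and a center-update step. A minimal sketch of that heuristic (this is a generic illustration, not the thesis's approximation algorithm):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's heuristic for k-Means. points is a list of equal-length
    numeric tuples; returns (centers, assignment)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers at k distinct input points
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center (squared l_2).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: each center moves to the centroid of its cluster.
        for c in range(k):
            cluster = [points[i] for i in range(len(points)) if assign[i] == c]
            if cluster:
                centers[c] = tuple(sum(xs) / len(cluster) for xs in zip(*cluster))
    return centers, assign
```

Lloyd's heuristic only converges to a local optimum, which is exactly why approximation algorithms with provable factors, like those the thesis derives from continuous facility location, are of interest.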
6

Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension / DCA based algorithms for learning with sparsity in high dimensional setting and stochastical learning

Phan, Duy Nhat 15 December 2016 (has links)
Abstract (translated from the French): Nowadays, with the growing abundance of very large data sets, high-dimensional classification problems have emerged as a challenge in the machine learning community and have attracted much attention from researchers in the field. In recent years, sparse learning and stochastic optimization techniques have proven effective for this type of problem. In this thesis, we focus on developing optimization methods for solving certain classes of problems on these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), recognized as powerful tools for nonconvex optimization. The thesis is composed of three parts. The first part addresses the variable selection problem. The second part studies the group variable selection problem. The last part of the thesis concerns stochastic learning. In the first part, we begin with variable selection in the Fisher discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), two different approaches to supervised classification in the high-dimensional setting, where the number of variables is much larger than the number of observations. Continuing this study, we examine the structure of the sparse covariance matrix estimation problem and provide four suitable algorithms based on DC programming and DCA (Chapter 4). Two applications, in finance and in classification, illustrate the efficiency of our methods. The second part studies L_{p,0} regularization for group variable selection (Chapter 5). Using a DC approximation of the L_{p,0} norm, we prove that the approximate problem, with suitable parameters, is equivalent to the original problem. Considering two equivalent reformulations of the approximate problem, we develop different algorithms based on DC programming and DCA to solve them. As applications, we apply our methods to group variable selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6), in which the objective function is the sum of a large family of nonconvex functions. As a case study, we propose a special stochastic DCA scheme for the log-linear model incorporating latent variables.

Abstract: These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), which are two different approaches to supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this study, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications, in finance and classification, are conducted to illustrate the efficiency of our methods. The second part studies L_{p,0} regularization for group variable selection (Chapter 5). Using a DC approximation of the L_{p,0} norm, we show that the approximate problem is equivalent to the original problem with suitable parameters. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6), in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
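The DCA scheme underlying this thesis writes the objective as a difference of convex functions, f = g - h, then repeatedly linearizes h at the current iterate and minimizes the resulting convex upper model. A minimal one-dimensional sketch on a toy double-well function of my own choosing, f(x) = x^4 - 2x^2 with g(x) = x^4 and h(x) = 2x^2 (the convex subproblem min_x x^4 - h'(x_k)·x solves in closed form to x = cbrt(x_k)); this is an illustration of the generic DCA step, not any algorithm from the thesis:

```python
import math

def dca(x0, iters=60):
    """DCA on f(x) = x**4 - 2*x**2, decomposed as g - h with
    g(x) = x**4 and h(x) = 2*x**2, both convex.
    Each step replaces h by its linearization at x_k, i.e. solves
    min_x x**4 - 4*x_k*x, whose stationarity condition
    4*x**3 = 4*x_k gives the closed-form update x = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)  # real cube root, sign preserved
    return x
```

Started anywhere except 0, the iterates converge to one of the two global minimizers x = ±1; started exactly at the critical point x = 0 they stay there, which illustrates the general guarantee of DCA: convergence to a critical point of the DC objective, not necessarily a global optimum.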
