741

Otimização por nuvem de partículas aplicada ao problema de atribuição de tarefas dinâmico / Particle swarm optimization applied to the dynamic task assignment problem

Pierobom, Jean Lima 13 February 2012 (has links)
Swarm Intelligence is a field of study that seeks solutions to optimization problems using computational techniques inspired by the emergent social behavior found in biology. The metaheuristic Particle Swarm Optimization (PSO) is relatively new and was inspired by the social behavior of bird flocks. Although originally designed for continuous optimization problems, PSO has shown good results in recent work on discrete optimization. This work addresses the Task Assignment Problem (TAP) and presents an application: the problem of allocating taxis to customers, where the optimization goal is to minimize the distance traveled by the fleet. First, the problem is solved in a static scenario with two versions of discrete PSO: the first approach is based on a binary encoding and the second uses permutations to encode solutions. The results show that the second approach is superior to the first in both solution quality and computational time, and that it finds the optimal solutions for the instances whose optimal values are known. The algorithm is then adapted to optimize the problem in a dynamic environment, applying different strategies to respond to changes. The new results show that combining some of these approaches enables the PSO algorithm to obtain good solutions as the problem's decision variables change, in all tested instances, across different sizes and scales of change.
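To make the permutation-encoded idea concrete, here is a minimal, hypothetical sketch of a discrete PSO for a static taxi-customer assignment; the swap-based velocity update, the parameters, and the random distance matrix are illustrative assumptions, not the thesis's exact formulation.

```python
# Illustrative sketch only: a permutation-encoded PSO for a static taxi-customer
# assignment, loosely following the idea described in the abstract. The distance
# matrix, parameters, and swap-based velocity update are assumptions made for
# demonstration; they are not the author's exact formulation.
import random

def assignment_cost(perm, dist):
    # Taxi i serves customer perm[i]; the objective is total fleet distance.
    return sum(dist[i][perm[i]] for i in range(len(perm)))

def swaps_towards(current, target):
    # Sequence of transpositions that turns `current` into `target`.
    current, swaps = list(current), []
    for i in range(len(current)):
        if current[i] != target[i]:
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            swaps.append((i, j))
    return swaps

def pso_assignment(dist, n_particles=20, iters=200, p_personal=0.4, p_global=0.4):
    n = len(dist)
    particles = [random.sample(range(n), n) for _ in range(n_particles)]
    personal = [(assignment_cost(p, dist), list(p)) for p in particles]
    g_cost, g_best = min(personal)
    for _ in range(iters):
        for k, perm in enumerate(particles):
            # Probabilistically apply swaps that move the particle towards its
            # personal best and the global best (the "velocity").
            for i, j in swaps_towards(perm, personal[k][1]):
                if random.random() < p_personal:
                    perm[i], perm[j] = perm[j], perm[i]
            for i, j in swaps_towards(perm, g_best):
                if random.random() < p_global:
                    perm[i], perm[j] = perm[j], perm[i]
            c = assignment_cost(perm, dist)
            if c < personal[k][0]:
                personal[k] = (c, list(perm))
            if c < g_cost:
                g_cost, g_best = c, list(perm)
    return g_best, g_cost

if __name__ == "__main__":
    random.seed(0)
    dist = [[random.randint(1, 50) for _ in range(6)] for _ in range(6)]
    best, cost = pso_assignment(dist)
    print(best, cost)
```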
742

New algorithms and data structures for the emptiness problem of alternating automata / Nouveaux algorithmes et structures de données pour le problème du vide des automates alternants

Maquet, Nicolas 03 March 2011 (has links)
This work studies new algorithms and data structures that are useful in the context of program verification. As computers have become more and more ubiquitous in our modern societies, an increasingly large number of computer-based systems are considered safety-critical. Such systems are characterized by the fact that a failure or a bug could potentially cause great damage, whether in loss of life, environmental damage, or economic damage. For safety-critical systems, the industrial software engineering community increasingly calls for techniques that provide some formal assurance that a given piece of software is correct.

One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically expressed as a temporal logic formula) can be checked against the finite-state abstraction with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into such automata. This property allows model checking to be reduced to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking using alternating automata. These algorithms combine antichains with reduced ordered binary decision diagrams (ROBDDs) in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine antichains with abstract interpretation in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams (LVBDDs), that is particularly well suited to encoding the transition functions of alternating automata over symbolic alphabets. All of these contributions are supported by empirical evaluations that confirm the practical usefulness of our approaches. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
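As a rough illustration of the antichain idea that underpins the first contribution, the following sketch keeps only the subset-minimal sets in a frontier; the tiny API and the choice of minimal (rather than maximal) elements are simplifying assumptions for demonstration, not the data structure used in the thesis.

```python
# Illustrative sketch only: a tiny antichain of sets that keeps only its
# subset-minimal elements. Antichain-based algorithms for alternating-automata
# emptiness and LTL model checking store frontiers of state sets this way to
# prune subsumed elements; this simplified API is an assumption for
# demonstration, not the thesis's data structure.
class Antichain:
    def __init__(self):
        self.elements = []          # list of frozensets, pairwise incomparable

    def insert(self, s):
        s = frozenset(s)
        # Discard s if some stored set is already a subset of it (s is subsumed).
        if any(t <= s for t in self.elements):
            return False
        # Otherwise remove every stored set that s subsumes, then add s.
        self.elements = [t for t in self.elements if not s <= t]
        self.elements.append(s)
        return True

    def __iter__(self):
        return iter(self.elements)

if __name__ == "__main__":
    frontier = Antichain()
    for candidate in [{1, 2, 3}, {1, 2}, {2, 3, 4}, {1, 2, 4}]:
        frontier.insert(candidate)
    print([sorted(s) for s in frontier])   # only the minimal sets survive
```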
743

L'heuristique de la Gestalt: une méta-modélisation dynamique en ligne comme assistance du processus d'une métaheuristique / Gestalt heuristic: dynamic and online meta-modeling as improving method of metaheuristic process

Philemotte, Christophe 09 June 2009 (has links)
Nowadays, few processes or tasks do not require optimizing some quantity: shortening delivery times, reducing the space used, cutting development effort, and so on. It is therefore no surprise that optimization is one of the most active research areas in information technology. In combinatorial optimization, metaheuristics are among the flagship algorithmic techniques. But this success still comes at the price of a significant amount of design and development time. Could we go further still and automate the preparation of metaheuristics, particularly under conditions such as lack of time, unfamiliarity with specialized techniques, or a poor understanding of the problem at hand? This is what we address in this thesis by means of a meta-modeling approach to search: the Gestalt heuristic.

Considering the problem representation as a lever that can be actuated underneath the search process carried out by a metaheuristic, the thesis proposes building an abstraction of this representation that can assist the metaheuristic in finding good solutions by constraining its search. We call this approach, inspired by Gestalt psychology, the Gestalt heuristic. It works mainly by aggregating the variables of the representation. This aggregation yields a structural abstraction, but also a functional one, in the sense that the metaheuristic's operators must now respect the integrity of the defined aggregates.

After establishing the context of the dissertation, we discuss the transposition of Gestalt psychology to combinatorial optimization and metaheuristics. We then formalize the Gestalt heuristic and describe its realization. Finally, a series of experimental studies is carried out to test the proposed concept and to validate the implementation, based on evolutionary algorithms, that we propose. In conclusion, we argue that an implementation of the Gestalt heuristic based, among other things, on a grouping genetic algorithm can positively assist genetic algorithms when the problem instances have a rich and complex structure, when they are large, when the optimization process is still in its early stages, and when the genetic algorithm is not specifically tuned. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
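To illustrate the notion of operators that respect aggregate integrity, here is a small, hypothetical sketch of a group-respecting crossover; the encoding and the fixed grouping are assumptions made for demonstration and do not reproduce the thesis's grouping genetic algorithm.

```python
# Illustrative sketch only: a crossover that respects a fixed aggregation of
# decision variables, in the spirit of the Gestalt heuristic described above.
# Variables belonging to the same aggregate are always inherited together from
# the same parent. The representation and grouping are assumptions for
# demonstration, not the thesis's actual grouping genetic algorithm.
import random

def group_respecting_crossover(parent_a, parent_b, groups):
    """groups: list of lists of variable indices forming the aggregates."""
    child = list(parent_a)
    for group in groups:
        # Each aggregate is copied wholesale from one randomly chosen parent,
        # so its internal structure is never broken apart by recombination.
        donor = parent_a if random.random() < 0.5 else parent_b
        for idx in group:
            child[idx] = donor[idx]
    return child

if __name__ == "__main__":
    random.seed(1)
    a = [0, 0, 0, 0, 0, 0]
    b = [1, 1, 1, 1, 1, 1]
    aggregates = [[0, 1], [2, 3, 4], [5]]
    print(group_respecting_crossover(a, b, aggregates))
```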
744

Approximation algorithms for covering problems in dense graphs

Levy, Eythan 06 March 2009 (has links)
We present a set of approximation results for several covering problems in dense graphs. These results show that, for several problems, classical algorithms with constant approximation ratios can be analyzed more finely and provide better constant approximation ratios under certain density constraints. In particular, we show that the maximal matching heuristic approximates VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) with a constant ratio strictly smaller than 2 when the proportion of edges present in the graph (weak density) is at least 3/4, or when the normalized minimum degree (strong density) is at least 1/2. We also show that this result can be improved by a greedy algorithm, which provides a constant ratio smaller than 2 when the weak density is at least 1/2. We provide tight families of graphs for all these approximation ratios. We then examine several algorithms from the literature for VC and SET COVER (SC), presenting a unified and critical view of the Karpinski-Zelikovsky, Imamura-Iwama, and Bar-Yehuda-Kehat algorithms and identifying the general scheme underlying them.

Finally, we study the CONNECTED VERTEX COVER (CVC) problem, for which we propose new approximation results in dense graphs. We first analyze Carla Savage's algorithm, then a new variant of the Karpinski-Zelikovsky algorithm. Our results show that these algorithms provide the same approximation ratios for CVC as the maximal matching heuristic and the Karpinski-Zelikovsky algorithm do for VC. We provide tight examples for the ratios guaranteed by both algorithms. We also introduce a new invariant, the "price of connectivity" of VC, defined as the ratio between the optimal solutions of CVC and VC, and show a nearly tight upper bound on its value as a function of the weak density. Our last chapter discusses software aspects and presents the use of the GRAPHEDRON software in the framework of approximation algorithms, as well as our contributions to the development of this system. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
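For reference, a minimal sketch of the maximal-matching heuristic analyzed in this abstract is given below; the edge-list graph encoding is an assumption made for demonstration.

```python
# Illustrative sketch only: the classical maximal-matching heuristic for
# VERTEX COVER that the abstract analyzes under density constraints. It greedily
# builds a maximal matching and returns both endpoints of every matched edge,
# which is a 2-approximation in general graphs (and better in dense ones, as
# the thesis shows). The edge-list encoding is an assumption for demonstration.
def matching_vertex_cover(edges):
    cover, matched = set(), set()
    for u, v in edges:
        # Take the edge into the matching only if both endpoints are still free.
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    cover = matching_vertex_cover(edges)
    assert all(u in cover or v in cover for u, v in edges)   # every edge covered
    print(sorted(cover))
```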
745

Ferramentas interativas de auxílio a diagnóstico por neuro-imagens 3D / Interactive tools for volumetric neuroimage based diagnosis

Yauri Vidalón, José Elías 22 August 2018 (has links)
Advisor: Wu Shin-Ting / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Because of their high spatial and spectral resolution, magnetic resonance images are increasingly used both in the study of human organs and in the diagnosis of structural and functional abnormalities, as well as in surgical planning and training. Along with the rapid evolution of medical image processing algorithms, computer-aided diagnosis systems specialized in mammography, angiography, and computed tomography and magnetic resonance of the thorax have emerged in the last decade. The structural complexity of the brain and the individual anatomical variation of skulls are, however, still challenges to developing a diagnostic system specialized in neuroimaging. Expert intervention remains essential both in identifying and in interpreting radiological findings. In this dissertation, we propose three techniques to help medical experts search for subtle radiological findings interactively. We present two interaction widgets, a movable lens and a volumetric probe, that continuously update the volume data in focus while they are manipulated, making it possible to investigate brain regions of interest while preserving their context. And, to facilitate the visual perception of subtle functional or structural changes, we propose a 1D transfer function editor to enhance or increase the contrast between adjacent voxels. The tools were assessed by the neuroimaging experts of the Laboratory of Neuro-Images of the Faculty of Medical Sciences of Unicamp. / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
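As a rough illustration of the 1D transfer function idea, the following sketch remaps voxel intensities through a piecewise-linear function to stretch the contrast of an intensity band of interest; the window boundaries and the NumPy-based implementation are assumptions for demonstration, not the dissertation's editor.

```python
# Illustrative sketch only: a 1D transfer function that remaps voxel intensities
# to boost contrast inside an intensity window of interest, similar in spirit to
# the editor described in the abstract. The control points and the use of NumPy
# interpolation are assumptions for demonstration.
import numpy as np

def apply_transfer_function(volume, control_points):
    """control_points: sorted (input_intensity, output_intensity) pairs."""
    xs, ys = zip(*control_points)
    # Piecewise-linear mapping applied voxel-wise to the whole volume.
    return np.interp(volume, xs, ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    volume = rng.integers(0, 256, size=(16, 16, 16)).astype(float)
    # Stretch the 100-150 band over most of the output range to highlight
    # subtle differences between adjacent voxels in that band.
    tf = [(0, 0), (100, 30), (150, 220), (255, 255)]
    enhanced = apply_transfer_function(volume, tf)
    print(volume.mean(), enhanced.mean())
```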
746

Automated image classification via unsupervised feature learning by K-means

Karimy Dehkordy, Hossein 09 July 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Research on image classification has grown rapidly in the field of machine learning, and many classification methods have already been implemented. Among them, the best results have been reported by neural-network-based techniques. One of the most important steps in automated image classification is feature extraction, which comprises two parts: feature construction and feature selection. Many feature extraction methods exist, but the best ones are deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning builds increasingly higher levels of abstraction, each derived from the previous one, by stacking multiple hidden layers. The two main problems with deep-learning approaches are speed and the number of parameters that must be configured: small changes or a poor choice of parameters can alter the results completely or even make them worse. Tuning these parameters is usually impractical for users without access to powerful computers, because one must run the algorithm repeatedly and adjust the parameters according to the results obtained, which can be very time consuming. This thesis attempts to address the speed and configuration issues found in traditional deep-network approaches. Some traditional unsupervised learning methods are used to build an automated image-classification approach that takes less time both to configure and to run.
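To make the approach more concrete, here is a small, hypothetical sketch of K-means-based unsupervised feature learning: a patch dictionary is learned without labels, images are encoded as centroid histograms, and a simple classifier is trained on top. The patch size, number of centroids, and scikit-learn pipeline are illustrative assumptions, not the thesis's configuration.

```python
# Illustrative sketch only: unsupervised feature learning with K-means in the
# style the abstract describes. Patch size, centroid count, and the
# scikit-learn pipeline are assumptions for demonstration.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import LogisticRegression

def learn_dictionary(images, n_centroids=64, patch_size=(6, 6), seed=0):
    rng = np.random.RandomState(seed)
    patches = np.vstack([
        extract_patches_2d(img, patch_size, max_patches=50, random_state=rng)
        .reshape(-1, patch_size[0] * patch_size[1])
        for img in images
    ])
    patches = patches - patches.mean(axis=1, keepdims=True)   # remove brightness
    return MiniBatchKMeans(n_clusters=n_centroids, random_state=seed).fit(patches)

def encode(images, kmeans, patch_size=(6, 6)):
    feats = []
    for img in images:
        patches = extract_patches_2d(img, patch_size)
        patches = patches.reshape(-1, patch_size[0] * patch_size[1])
        patches = patches - patches.mean(axis=1, keepdims=True)
        assignments = kmeans.predict(patches)
        # Bag-of-centroids histogram as the image-level feature vector.
        feats.append(np.bincount(assignments, minlength=kmeans.n_clusters))
    return np.array(feats, dtype=float)

if __name__ == "__main__":
    rng = np.random.RandomState(0)
    images = rng.rand(40, 24, 24)                   # toy stand-in for real images
    labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
    km = learn_dictionary(images)
    X = encode(images, km)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("train accuracy:", clf.score(X, labels))
```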
747

Machine Vision Assisted In Situ Ichthyoplankton Imaging System

Iyer, Neeraj 12 July 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Recently there has been a lot of effort in developing systems for sampling and automatically classifying plankton from the oceans. Existing methods assume the specimens have already been precisely segmented, or aim at analyzing images containing a single specimen (extracting its features and/or recognizing specimens as single in-focus targets in small images). The resolution of existing systems is limiting. Our goal is to develop automated, very high resolution image sensing of critically important, yet under-sampled, components of the planktonic community by addressing both the physical sensing system (e.g. camera, lighting, depth of field) and the crucial image extraction and recognition routines. The objective of this thesis is to develop a framework that (i) detects and segments all organisms of interest automatically, directly from the raw data, while filtering out noise and out-of-focus instances, (ii) extracts the best features from the images, and (iii) identifies and classifies the plankton species. Our approach focuses on utilizing the full computational power of a multicore system by implementing a parallel programming approach that can process large volumes of high resolution plankton images obtained from our newly designed imaging system, the In Situ Ichthyoplankton Imaging System (ISIIS). We compare some of the widely used segmentation methods, with emphasis on accuracy and speed, to find the one that works best on our data. We design a robust, scalable, fully automated system for high-throughput processing of the ISIIS imagery.
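As a rough illustration of the detection-and-filtering step such a pipeline performs, the sketch below thresholds a frame, labels connected components, and discards tiny blobs as noise; the threshold, size cut-off, and SciPy-based implementation are assumptions for demonstration, not the actual ISIIS pipeline.

```python
# Illustrative sketch only: a minimal detect-and-filter step of the kind the
# pipeline above performs, with a global threshold, connected-component
# labelling, and a size filter to discard noise and tiny out-of-focus blobs.
# The threshold value and size cut-off are assumptions for demonstration.
import numpy as np
from scipy import ndimage

def extract_candidates(frame, threshold=0.5, min_area=20):
    # Dark organisms on a bright background: keep pixels below the threshold,
    # then group them into connected components.
    mask = frame < threshold
    labels, _ = ndimage.label(mask)
    candidates = []
    for i, region in enumerate(ndimage.find_objects(labels), start=1):
        area = (labels[region] == i).sum()
        if area >= min_area:                   # drop tiny specks (noise)
            candidates.append(frame[region])   # cropped region of interest
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.7, 1.0, size=(128, 128))   # bright background
    frame[40:60, 50:90] = 0.2                        # synthetic "organism"
    rois = extract_candidates(frame)
    print(len(rois), rois[0].shape if rois else None)
```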
748

Case Studies to Learn Human Mapping Strategies in a Variety of Coarse-Grained Reconfigurable Architectures

Malla, Tika K. 05 1900 (has links)
Computer hardware and algorithm design have seen significant progress over the years, yet there are still several domains in which humans are more efficient than computers: in image recognition, image tagging, and natural language understanding and processing, for example, humans easily handle tasks that complicated algorithms struggle with. This thesis presents case studies that investigate human mapping strategies for solving the mapping problem in coarse-grained reconfigurable architectures (CGRAs). To achieve optimal performance and consume less energy in CGRAs, the place-and-route problem has always been a major concern, and human strengths such as pattern recognition and experience can be helpful in problems of this kind. The case studies therefore analyze a computer mapping game called UNTANGLED as a medium for conveying insights into human mapping strategies across a variety of architectures. The purpose of this research was to learn from humans so that we can devise algorithms that outperform existing ones. We observed how human strategies vary as players are presented with different architectures, different architectures with constraints, and different visualizations, as well as how the quality of solutions changes with experience. All of the case studies obtained from these human strategies provide useful feedback that can improve existing algorithms. These insights can be adapted to find the best architectural solution for a particular domain and to guide future research on mapping onto mesh- and stripe-based CGRAs.
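To make the underlying optimization target concrete, here is a minimal, hypothetical sketch of how a candidate mapping onto a mesh-shaped CGRA might be scored; the Manhattan-distance cost model and the data layout are illustrative assumptions rather than the objective actually used in UNTANGLED.

```python
# Illustrative sketch only: scoring a candidate mapping of an operation graph
# onto a mesh-shaped CGRA, the kind of objective both human players and
# automated mappers try to minimize. The cost model (sum of Manhattan distances
# between communicating operations) is an assumption for demonstration.
def mapping_cost(placement, edges, mesh_size):
    """placement: op -> (row, col) on the mesh; edges: (producer, consumer) pairs."""
    rows, cols = mesh_size
    for r, c in placement.values():
        assert 0 <= r < rows and 0 <= c < cols, "operation placed off the mesh"
    # Longer routes cost more; a good mapping keeps communicating ops adjacent.
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1])
               for u, v in edges)

if __name__ == "__main__":
    edges = [("a", "b"), ("b", "c"), ("a", "c")]
    compact = {"a": (0, 0), "b": (0, 1), "c": (1, 0)}
    spread = {"a": (0, 0), "b": (3, 3), "c": (0, 3)}
    print(mapping_cost(compact, edges, (4, 4)), mapping_cost(spread, edges, (4, 4)))
```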
749

An Empirical Study of Software Debugging Games with Introductory Students

Reynolds, Lisa Marie 08 1900 (has links)
Bug Fixer is a web-based application that complements lectures with hands-on exercises that encourage students to think about the logic in programs. Bug Fixer presents students with code containing several bugs that they must fix. The process of fixing the bugs forces students to think conceptually about the code and reinforces their understanding of the logic behind the algorithms. In this work, we conducted a study using Bug Fixer with undergraduate students in the CSCE1040 course at the University of North Texas to evaluate whether the system increases their conceptual understanding of the algorithms and improves their software testing skills. Students participated in weekly activities to fix bugs in code. Most students enjoyed Bug Fixer and recommended the system for future use. Students typically reported a better understanding of the algorithms used in class. We observed a slight increase in passing grades for students who participated in our study compared to students in other sections of the course, taught by the same instructor, who did not participate. The students who did not report a positive experience provided comments for future improvements that we plan to address in future work.
750

Modal satisfiability in a constraint logic environment

Stevenson, Lynette 30 November 2007 (has links)
The modal satisfiability problem has to date been solved either with a specifically designed algorithm or by translating the modal logic formula into a different class of problem, such as first-order logic, a propositional satisfiability problem, or a constraint satisfaction problem. These approaches, and the solvers developed to support them, are surveyed, and a synthesis thereof is presented. The translation of a modal K formula into a constraint satisfaction problem, as developed by Brand et al. [18], is further enhanced. The modal formula, which must be in conjunctive normal form, is translated into layered propositional formulae. Each of these layers is translated into a constraint satisfaction problem and solved using the constraint solver ECLiPSe. I extend this translation to deal with reflexive and transitive accessibility relations, thereby providing for the modal logics KT and S4. Two of the difficulties that arise when these accessibility relations are added are that the resulting formula increases considerably in complexity, and that it is no longer in conjunctive normal form (CNF). I eliminate the need to convert the formula to CNF and deal instead with formulae in negation normal form (NNF). I apply a number of enhancements to the formula at each modal layer before it is translated into a constraint satisfaction problem, including extensive simplification, assigning a single value to propositional variables that occur only positively or only negatively, and caching the status of the formula at each node of the search tree. All of these significantly prune the search space. The final results I achieve compare favorably with those obtained by other solvers. / Computing / M.Sc. (Computer Science)
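As a small illustration of the negation-normal-form preprocessing mentioned above, the sketch below pushes negations inward over the modal K connectives; the tuple encoding and operator names are assumptions made for demonstration, not the thesis's representation.

```python
# Illustrative sketch only: pushing negations inward to obtain negation normal
# form (NNF) for a modal K formula. Formulas are encoded as nested tuples; this
# encoding and the operator names are assumptions for demonstration.
def nnf(f):
    op = f[0]
    if op == "not":
        g = f[1]
        if g[0] == "var":                       # ¬p stays as a literal
            return ("not", g)
        if g[0] == "not":                       # ¬¬φ  ≡  φ
            return nnf(g[1])
        if g[0] == "and":                       # ¬(φ ∧ ψ)  ≡  ¬φ ∨ ¬ψ
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "or":                        # ¬(φ ∨ ψ)  ≡  ¬φ ∧ ¬ψ
            return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "box":                       # ¬□φ  ≡  ◇¬φ
            return ("dia", nnf(("not", g[1])))
        if g[0] == "dia":                       # ¬◇φ  ≡  □¬φ
            return ("box", nnf(("not", g[1])))
    if op == "var":
        return f
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    if op in ("box", "dia"):
        return (op, nnf(f[1]))
    raise ValueError(f"unknown connective: {op}")

if __name__ == "__main__":
    # ¬□(p ∧ ◇q)  becomes  ◇(¬p ∨ □¬q)
    phi = ("not", ("box", ("and", ("var", "p"), ("dia", ("var", "q")))))
    print(nnf(phi))
```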
