771

A influência dos padrões para sistemas de gestão no desempenho das empresas / The influence of management systems standards on the performance of firms

Giovannini, Fabrizio 18 November 2008
As empresas precisam de sistemas de regras, formais ou não, para serem administradas. Os sistemas de regras para a administração são aqui chamados de sistemas de gestão. Entre as funções do administrador está, portanto, a construção de sistemas de gestão que atendam às necessidades dos stakeholders da empresa e que assegurem a sobrevivência e a prosperidade da mesma. O problema pode ser dividido em duas partes: que tipo de sistema é o melhor para a organização e qual a melhor forma de implantá-lo. No corpo do conhecimento da administração há inumeráveis soluções propostas para esses problemas, embasadas em diferentes abordagens e teorias, e dirigidas a diferentes tipos de empresas e contextos. Entre os esforços para a busca de soluções cada vez melhores ganhou ímpeto, nos últimos 15 anos, um movimento de criação de Padrões para Sistemas de Gestão (PSGs). O padrão que se tornou mais conhecido é a norma ISO 9.000, voltado à criação de sistemas de gestão da qualidade. Depois desse padrão pioneiro, outros foram e estão sendo criados com o objetivo de abranger a maioria das áreas dos sistemas de gestão: meio ambiente, saúde e segurança do trabalho, responsabilidade social, governança, demonstrações financeiras e contabilidade, entre outros. Apesar de um número crescente de organizações estar optando por usar estes padrões para criar seus sistemas de gestão, há dúvidas e controvérsias sobre sua eficiência e eficácia. Esta tese investiga a influência dos PSGs sobre o desempenho das empresas. Através de uma pesquisa quantitativa compara o desempenho, ao longo de 4 anos, de empresas com diferentes níveis de adoção de PSGs. A pesquisa foi feita com todas as indústrias de autopeças brasileiras permitindo, dessa forma, certo controle das variáveis intervenientes. A qualidade dos dados contou com a inestimável colaboração da empresa de informações comerciais Serasa S.A. que forneceu os dados de desempenho financeiro e de outras variáveis intervenientes. Utilizando-se técnicas estatísticas uni e multivariadas, entre elas a Modelagem de Equações Estruturantes, rejeitou-se a hipótese nula de que não há influência dos PSGs sobre o desempenho das empresas. Esse resultado, em conjunto com outras evidências estatísticas, permite afirmar dentro dos limites do modelo conceitual da pesquisa que a adoção de PSGs influencia positivamente o desempenho das indústrias de autopeças brasileiras. Há também indícios de que empresas pequenas e médias se beneficiam mais da adoção de PSGs do que as grandes empresas. Portanto, Padrões para Sistemas de Gestão podem ser recomendados como uma ferramenta potencialmente eficaz, especialmente para empresas industriais. / To be managed, firms need systems of rules, formal or informal. The systems of rules used to manage a business are here called management systems. One of the manager's functions is therefore to build management systems that meet the needs of the firm's stakeholders and secure the firm's survival and prosperity. The problem can be divided into two parts: what kind of system is best for the organization, and what is the best way to implement it. In the management body of knowledge there are innumerable proposed solutions to these problems, based on different approaches and theories and directed at different kinds of firms and contexts. Among the efforts to find ever better solutions, a movement to create Management Systems Standards has gained momentum over the last 15 years. The best-known standard is ISO 9000, dedicated to the creation of quality management systems. After this pioneering standard, others have been and are being created with the goal of covering most areas where management systems are applied: environment, health and safety, social responsibility, governance, finance and accounting, among others. Although a growing number of organizations are opting to use these standards to create their own management systems, there are doubts and controversies regarding their efficiency and effectiveness. This thesis investigates the influence of Management Systems Standards on firm performance. Through quantitative research, it compares the performance, over a 4-year period, of firms with different levels of adoption of Management Systems Standards. The research covered all Brazilian auto parts manufacturers, allowing a measure of control over the intervening variables. The quality of the data owes much to the invaluable collaboration of the commercial information firm Serasa S.A., which supplied the financial performance data and other intervening variables. Using univariate and multivariate statistical techniques, among them Structural Equation Modeling, the null hypothesis that Management Systems Standards have no influence on firm performance was rejected. This result, together with other statistical evidence, allows us to affirm, within the limits of the research's conceptual framework, that the adoption of Management Systems Standards positively influences the performance of Brazilian auto parts manufacturers. There is also evidence that small and medium-sized companies benefit more from the adoption of Management Systems Standards than large firms. Therefore, Management Systems Standards can be recommended as a potentially effective tool, especially for manufacturing firms.
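As a loose illustration of the kind of group comparison described above (the thesis itself uses univariate and multivariate techniques, including Structural Equation Modeling, on the Serasa data), the sketch below fits a much simpler cross-sectional regression of performance on a standards-adoption score; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per auto parts firm, with
#   perf_growth - performance over the 4-year window
#   mss_level   - degree of adoption of Management Systems Standards
#   firm_size   - a control for one intervening variable
df = pd.read_csv("autoparts_firms.csv")  # assumed file name

# Simple OLS: a positive, significant coefficient on mss_level would count
# against the null hypothesis that standards adoption has no influence.
model = smf.ols("perf_growth ~ mss_level + firm_size", data=df).fit()
print(model.summary())
```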
772

Aprendizagem de uma habilidade motora com diferentes níveis de complexidade em sujeitos pós acidente vascular cerebral / Learning a motor skill task with different levels of complexity in post stroke subjects

Palma, Gisele Carla dos Santos 17 November 2016
Introdução: Ainda não é conhecido na literatura se os sujeitos pós-AVC têm a capacidade de aprender ações complexas. Os estudos existentes apresentam falhas metodológicas, assim como na implementação da complexidade da tarefa. Este trabalho investigou os efeitos da manipulação da complexidade da tarefa na aprendizagem motora desta população. Método: Vinte e quatro sujeitos pós-AVC (Grupo experimental - GE) e vinte e quatro sujeitos saudáveis (Grupo controle - GC) foram selecionados e divididos em dois experimentos: baixa complexidade e alta complexidade. Foram desenvolvidas duas tarefas denominadas de baixa e de alta complexidade, com variação não só no número de elementos, mas também na carga de processamento exigida para a execução da mesma, as quais deram origem ao experimento 1 e 2, respectivamente. Esta tarefa era executada em ambiente de realidade virtual, a partir do deslocamento do centro de pressão para controle dos objetos móveis na tela. O delineamento foi constituído de 150 tentativas para a fase de aquisição, dividida em 3 dias de prática. Após 4 dias sem prática foi realizado o teste de retenção (RET) e transferência (TR). A alteração realizada no teste de TR foi na direção do deslocamento dos objetos. As variáveis dependentes de ambos experimentos foram: pontuação e tempo de execução. Para cada variável dependente foi conduzido uma Anova two-way com medidas repetidas (2 grupos x 4 momentos) seguido de post hoc de Tukey para identificar os momentos das mudanças. Resultados: No experimento 1, para a variável pontuação ambos os grupos melhoraram o desempenho ao longo da aquisição e o mantiveram no teste de retenção. Não houve diferença entre o GC e GE. Para a variável tempo de execução, o GC diferenciou-se do GE em todos os momentos (Início e final da aquisição, teste de retenção e transferência), apresentando pior desempenho. No experimento 2, tanto para a variável pontuação, quanto para tempo de execução o GE diferenciou-se do GC no final da aquisição e no teste de RET. Conclusão: Os sujeitos pós-AVC foram capazes de aprender a tarefa de baixa complexidade, mas esta aprendizagem não foi passível de generalização. Quanto a tarefa de alta complexidade, os sujeitos do GE não aprenderam a tarefa, sendo muito impactados pelo aumento de complexidade. Possivelmente, os déficits cognitivos relacionados ao planejamento e sequenciamento de ações impactaram na aquisição da habilidade de alta complexidade e não prejudicaram a aprendizagem da tarefa de baixa complexidade / Introduction: It is not yet known in the literature whether post-stroke subjects have the ability to learn complex actions. Existing studies have methodological flaws, including in how task complexity is implemented. This study investigated the effects of manipulating task complexity on motor learning in this population. Method: Twenty-four post-stroke subjects (experimental group - EG) and twenty-four healthy subjects (control group - CG) were selected and divided into two experiments: low complexity and high complexity. Two tasks were developed, of low and of high complexity, varying not only the number of elements but also the processing load required, which gave rise to experiments 1 and 2, respectively. The task was performed in a virtual reality environment, using the displacement of the center of pressure to control the objects moving on the screen. The design consisted of 150 trials in the acquisition phase, divided into three days of practice. After 4 days without practice, retention (RET) and transfer (TR) tests were performed. The TR test changed the direction of the objects' movement. The dependent variables in both experiments were score and execution time. For each dependent variable, a two-way ANOVA with repeated measures (2 groups x 4 moments) was conducted, followed by Tukey post hoc tests to identify the moments of change. Results: In experiment 1, for the score variable, both groups improved performance over acquisition and maintained it in the retention test. There was no difference between the CG and the EG. For the execution time variable, the CG differed from the EG at all moments (beginning and end of acquisition, RET and TR), showing worse performance. In experiment 2, for both score and execution time, the EG differed from the CG at the end of acquisition and in the RET test. Conclusion: The post-stroke subjects were able to learn the low-complexity task, but this learning did not generalize. As for the high-complexity task, the post-stroke subjects did not learn it, being strongly affected by the increase in complexity. Possibly, cognitive deficits related to planning and sequencing actions affected the acquisition of the high-complexity skill while not harming learning of the low-complexity task.
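To make the analysis concrete, here is a minimal sketch of the group-by-moment comparison described above, with hypothetical file and column names. It simplifies the design: the thesis uses a two-way ANOVA with repeated measures, whereas this sketch fits a plain two-way factorial model and a Tukey post hoc, ignoring the within-subject correlation.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per subject x moment, with columns
#   score  - task score (execution time would be analysed the same way)
#   group  - 'EG' (post-stroke) or 'CG' (healthy)
#   moment - 'acq_start', 'acq_end', 'RET' or 'TR'
df = pd.read_csv("experiment1.csv")  # assumed file name

# Two-way factorial ANOVA (group x moment); repeated-measures correction omitted.
fit = smf.ols("score ~ C(group) * C(moment)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Tukey post hoc over the group-by-moment cells, to locate where groups differ.
cells = df["group"] + "_" + df["moment"]
print(pairwise_tukeyhsd(df["score"], cells))
```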
773

Analyse de la complexité des programmes par interprétation sémantique / Program complexity analysis by semantics interpretation

Péchoux, Romain 14 November 2007
Il existe de nombreuses approches développées par la communauté Implicit Computational Complexity (ICC) permettant d'analyser les ressources nécessaires à la bonne exécution des algorithmes. Dans cette thèse, nous nous intéressons plus particulièrement au contrôle des ressources à l'aide d'interprétations sémantiques. Après avoir rappelé brièvement la notion de quasi-interprétation ainsi que les différentes propriétés et caractérisations qui en découlent, nous présentons les différentes avancées obtenues dans l'étude de cet outil : nous étudions le problème de la synthèse qui consiste à trouver une quasi-interprétation pour un programme donné, puis, nous abordons la question de la modularité des quasi-interprétations. La modularité permet de diminuer la complexité de la procédure de synthèse et de capturer un plus grand nombre d'algorithmes. Après avoir mentionné différentes extensions des quasi-interprétations à des langages de programmation réactifs, bytecode ou d'ordre supérieur, nous introduisons la sup-interprétation. Cette notion généralise la quasi-interprétation et est utilisée dans des critères de contrôle des ressources afin d'étudier la complexité d'un plus grand nombre d'algorithmes dont des algorithmes sur des données infinies ou des algorithmes de type diviser pour régner. Nous combinons cette notion à différents critères de terminaison comme les ordres RPO, les paires de dépendance ou le size-change principle et nous la comparons à la notion de quasi-interprétation. En outre, après avoir caractérisé des petites classes de complexité parallèles, nous donnons quelques heuristiques permettant de synthétiser des sup-interprétations sans la propriété sous-terme, c'est à dire des sup-interprétations qui ne sont pas des quasi-interprétations. Enfin, dans un dernier chapitre, nous adaptons les sup-interprétations à des langages orientés-objet, obtenant ainsi différents critères pour contrôler les ressources d'un programme objet et de ses méthodes / There are several approaches developed by the Implicit Computational Complexity (ICC) community which try to analyze and control program resources. In this document, we focus our study on resource control with the help of semantic interpretations. After introducing the notion of quasi-interpretation together with its distinct properties and characterizations, we present the results obtained in the study of this tool: we study the synthesis problem, which consists in finding a quasi-interpretation for a given program, and we tackle the issue of quasi-interpretation modularity. Modularity makes it possible to decrease the complexity of the synthesis procedure and to capture more algorithms. We present several extensions of quasi-interpretations to reactive programming, bytecode verification and higher-order programming. Afterwards, we introduce the notion of sup-interpretation. This notion strictly generalizes that of quasi-interpretation and is used in distinct criteria in order to control the resources of more algorithms, including algorithms over infinite data and algorithms using a divide-and-conquer strategy. We combine sup-interpretations with distinct termination criteria, such as RPO orderings, dependency pairs or the size-change principle, and we compare them to the notion of quasi-interpretation. Using the notion of sup-interpretation, we characterize small parallel complexity classes. We also provide some heuristics for sup-interpretation synthesis: we manage to synthesize sup-interpretations without the subterm property, that is, sup-interpretations which are not quasi-interpretations. Finally, we extend sup-interpretations to object-oriented programs, thus obtaining distinct criteria for the resource control of object-oriented programs and their methods.
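As a minimal worked example of the central tool (a textbook-style illustration in the spirit of the standard definition, not a result of the thesis), consider addition over unary numerals; a quasi-interpretation assigns monotone, polynomially bounded functions with the subterm property so that every rewrite rule is weakly decreasing:

```latex
% Rewrite rules for addition over unary numerals:
%   add(0, y) -> y        add(s(x), y) -> s(add(x, y))
% Candidate quasi-interpretation [.] (monotone, polynomially bounded,
% with the subterm property [f](...,X,...) >= X):
\[
  [0] = 0, \qquad [s](X) = X + 1, \qquad [\mathit{add}](X, Y) = X + Y .
\]
% Every rule is weakly decreasing, left-hand side >= right-hand side:
\[
  [\mathit{add}]([0], Y) = Y \ \ge\ Y,
  \qquad
  [\mathit{add}]([s](X), Y) = X + Y + 1 \ \ge\ [s]([\mathit{add}](X, Y)) = X + Y + 1 .
\]
% Because the interpretations are polynomials, the values computed by the
% program have size polynomial in the size of the input.
```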
774

Complexité d'ordre supérieur et analyse récursive / Higher order complexity and computable analysis

Férée, Hugo 10 December 2014
Alors que la complexité des fonctions d'ordre 1 est bien définie et étudiée, il n'existe pas de notion satisfaisante à tout ordre. Une telle théorie existe déjà à l'ordre 2 et permet de définir une classe analogue aux fonctions calculables en temps polynomial usuelles. Cela est tout particulièrement intéressant dans le domaine de l'analyse récursive où l'on peut représenter, entre autres, les nombres et les fonctions réelles par des fonctions d'ordre 1. On peut alors remarquer un lien fort entre calculabilité et continuité, et aussi rapprocher la complexité avec certaines propriétés analytiques, ce que nous illustrons dans le cas des opérateurs réels. Nous prouvons cependant que, du point de vue de la complexité, les fonctions d'ordre 1 ne permettent pas de représenter fidèlement certains espaces mathématiques. Ce résultat appuie tout particulièrement la nécessité d'une théorie de la complexité d'ordre supérieur. Nous développons alors un modèle de calcul basé sur la sémantique des jeux, où l'application de deux fonctions est représentée par la confrontation de deux stratégies dans un jeu. En définissant la taille de telles stratégies, nous pouvons déduire une notion robuste et pertinente de complexité pour ces stratégies et donc pour les fonctions d'ordre supérieur. Nous définissons aussi une classe de fonctions calculables en temps polynomial qui paraît être un bon candidat pour définir une classe de complexité raisonnable à tout ordre / While first-order complexity is well defined and studied, there is no satisfactory notion of complexity at higher orders. Such a theory already exists at order 2 and provides a complexity class analogous to the usual polynomial-time computable functions. This is especially interesting in the domain of computable analysis, where real numbers and real functions, among other objects, can be represented by first-order functions. In particular, there is a clear link between computability and continuity, and we also illustrate, in the case of real operators, that complexity can be related to certain analytical properties. However, we prove that, from a complexity point of view, some mathematical spaces cannot be faithfully represented by order-1 functions and require higher-order ones. This result underlines the need to develop a notion of complexity at higher types, which will be useful in particular, but not only, to computable analysis. We develop a computational model for higher-type sequential computations based on a game semantics approach, in which applying one function to another is represented by a game between two strategies. By defining the size of such strategies, we obtain a robust and meaningful notion of complexity at all types, together with a class of polynomial-time computable higher-order functionals which seems to be a good candidate for a class of feasible functionals at higher types.
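As a small, hedged illustration of the order-1 representations mentioned above (standard computable-analysis folklore, not a construction from the thesis), a real number can be given as a function that returns a rational approximation at any requested precision, and an operator on reals then acts on such functions:

```python
from fractions import Fraction

def sqrt2(n: int) -> Fraction:
    # Order-1 representation of sqrt(2): a rational within 2**-n of the real.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** (n + 1)):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def square(x_rep):
    # A type-2 operator: maps a representation of x to one of x**2,
    # assuming |x| <= 2 so that precision n+3 on x yields precision n on x**2.
    def rep(n: int) -> Fraction:
        q = x_rep(n + 3)
        return q * q
    return rep

two = square(sqrt2)
print(float(two(20)))  # ~2.0, accurate to within 2**-20
```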
775

Partage de secret et théorie algorithmique de l'information / Secret Sharing and Algorithmic Information Theory

Kaced, Tarik 04 December 2012
Notre travail sur le partage de secret se base sur les points de vue théoriques de la Théorie de l'Information de Shannon et de la Complexité de Kolmogorov. Nous allons expliquer comment ces trois sujets sont intimement liés. Les inégalités d'information jouent un rôle central dans ce manuscrit. Ce sont les inégalités pour l'entropie de Shannon, mais elles correspondent aussi aux inégalités pour la complexité de Kolmogorov. La complexité de Kolmogorov formalise l'idée d'aléatoire pour les chaînes de caractères. Ce sont là deux raisons qui justifient à elles seules la notion de partage de secret algorithmique dans le cadre de la Théorie Algorithmique de l'Information (si l'on sait partager un secret aléatoire, on peut partager n'importe quel secret). Originalement étudié par sa définition combinatoire, le partage de secret a été plus tard généralisé par sa formulation par les quantités définies dans la théorie de l'information. Cette étape a permis l'utilisation des inégalités d'information et s'est révélée très importante dans la caractérisation des schémas de partage de secret efficaces. L'étude des inégalités d'information n'en est qu'à ses débuts. Nous y contribuons en introduisant la notion d'inégalités essentiellement conditionnelles, qui montre une fois de plus que ces inégalités ne sont pas encore complètement comprises. / Our work deals with secret sharing from the theoretical viewpoints of Shannon's Information Theory and Kolmogorov's Algorithmic Information Theory. We explain how these three subjects are naturally and deeply intertwined. Information inequalities play a central role in this text. They are the inequalities for Shannon entropy, but they are also in exact correspondence with the inequalities for Kolmogorov complexity. Kolmogorov complexity formalizes the idea of randomness for strings. These two reasons alone justify considering the notion of secret sharing in the algorithmic framework (if one can share a random secret, one can share anything). Originally, secret sharing was studied through a combinatorial lens; only later was it formalized more generally using information-theoretic measures. This step allowed the use of information inequalities, which proved to be very important for understanding the existence of efficient secret-sharing schemes. The investigation of information inequalities is still in its early stages. We contribute to the subject by introducing the notion of essentially conditional inequalities, which shows once again that information inequalities are not yet fully understood.
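For intuition, here is the textbook 2-out-of-2 scheme, given purely as an illustration of perfect secret sharing (not a construction from the thesis): each share on its own is uniformly random, so the Shannon entropy of the secret given a single share equals the entropy of the secret, while both shares together determine it.

```python
import secrets

def share(secret: bytes):
    # 2-out-of-2 XOR sharing: s1 is a uniform random mask, s2 = secret XOR s1.
    s1 = secrets.token_bytes(len(secret))
    s2 = bytes(a ^ b for a, b in zip(secret, s1))
    return s1, s2

def reconstruct(s1: bytes, s2: bytes) -> bytes:
    # XORing the two shares cancels the random mask and recovers the secret.
    return bytes(a ^ b for a, b in zip(s1, s2))

s1, s2 = share(b"attack at dawn")
assert reconstruct(s1, s2) == b"attack at dawn"
```

The same masking idea underlies the parenthetical remark in the abstract: once a uniformly random secret can be shared, any secret can be shared by masking it with one.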
776

Approximation et complexité paramétrée de problèmes d’optimisation dans les graphes : partitions et sous-graphes / Approximation and parameterized complexity of graph optimisation problems : partitions and subgraphs

Watrigant, Rémi 02 October 2014
La théorie de la NP-complétude nous apprend que pour un certain nombre de problèmes d'optimisation, il est vain d'espérer un algorithme efficace calculant une solution optimale. Partant de ce constat, un moyen pour contourner cet obstacle est de réaliser un compromis sur chacun de ces critères, engendrant deux approches devenues classiques. La première, appelée approximation polynomiale, consiste à développer des algorithmes efficaces et retournant une solution proche d'une solution optimale. La seconde, appelée complexité paramétrée, consiste à développer des algorithmes retournant une solution optimale mais dont l'explosion combinatoire est capturée par un paramètre de l'entrée bien choisi. Cette thèse comporte deux objectifs. Dans un premier temps, nous proposons d'étudier et d'appliquer les méthodes classiques de ces deux domaines afin d'obtenir des résultats positifs et négatifs pour deux problèmes d'optimisation dans les graphes : un problème de partition appelé Sparsest k-Compaction, et un problème de recherche d'un sous-graphe avec une cardinalité fixée appelé Sparsest k-Subgraph. Dans un second temps, nous présentons comment les méthodes de ces deux domaines ont pu se combiner ces dernières années pour donner naissance au principe d'approximation paramétrée. En particulier, nous étudierons les liens entre approximation et algorithmes de noyaux. / The theory of NP-completeness tells us that for many optimization problems there is no hope of finding an efficient algorithm that computes an optimal solution. Based on this, two classical approaches have been developed to deal with these problems. The first one, called polynomial-time approximation, consists in designing efficient algorithms that compute a solution close to an optimal one. The second one, called parameterized complexity, consists in designing exact algorithms whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two graph optimization problems: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how methods from these two domains have been combined in recent years into a concept called parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
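As a point of reference for one of the problems named above, here is a hedged sketch of the Sparsest k-Subgraph objective (choose k vertices inducing as few edges as possible). The brute force below only illustrates the definition and is exponential; the thesis is concerned with approximation and parameterized algorithms for this problem, not with this enumeration.

```python
from itertools import combinations

def sparsest_k_subgraph(vertices, edges, k):
    # Exact brute force over all k-vertex subsets: returns (cost, subset),
    # where cost is the number of edges induced by the chosen subset.
    edge_set = {frozenset(e) for e in edges}
    best = None
    for subset in combinations(vertices, k):
        cost = sum(1 for pair in combinations(subset, 2) if frozenset(pair) in edge_set)
        if best is None or cost < best[0]:
            best = (cost, subset)
    return best

# Tiny example: a triangle plus an isolated vertex; the sparsest 3-subgraph
# avoids two triangle edges by picking the isolated vertex.
print(sparsest_k_subgraph([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3)], 3))  # (1, (1, 2, 4))
```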
777

Approximation, complexité paramétrée et stratégies de résolution de problèmes d'affectation multidimensionnelle / Approximability, parameterized complexity and solving strategies of some multidimensional assignment problems

Duvillié, Guillerme 07 October 2016
Au cours de la thèse, nous nous sommes intéressés aux problèmes d'empilement de wafers. Ces problèmes apparaissent lors de la fabrication de processeurs en 3 dimensions. Au cours du processus de fabrication, les puces électroniques doivent être empilées les unes sur les autres. Jusqu'à peu, ces dernières, une fois gravées sur des plaques de silicium appelées wafers, étaient découpées, puis triées afin d'écarter les puces défectueuses et enfin assemblées les unes entre elles. Cependant empiler les wafers plutôt que les puces présente de nombreux avantages techniques et financiers. Naturellement, étant impossible d'écarter les puces défectueuses sans découper la plaque de silice, le problème de la superposition d'une puce viable avec une puce défectueuse se pose. Une pile de puces étant considérée comme défectueuse si elle contient ne serait-ce qu'une puce défectueuse, la superposition non réfléchie des wafers entre eux mènerait à un rendement désastreux. Afin de générer un nombre minimum de piles défectueuses, une "cartographie" de chaque wafer candidat à la superposition est réalisée lors d'une phase de test, permettant de situer les puces défectueuses sur le wafer. Une fois cette cartographie réalisée, l'objectif est de sélectionner les wafers qui seront assemblés ensemble de manière à faire correspondre les défauts de chacun des wafers. Ce problème peut être modélisé à l'aide d'un problème d'affectation multidimensionnelle. Chaque wafer est représenté par un vecteur comportant autant de composantes que de puces sur le wafer qu'il représente. Une composante égale à zéro matérialise une puce défectueuse tandis qu'un un matérialise une puce viable. Chaque lot de wafers est représenté par un lot de vecteurs. Formellement, une instance d'empilement de wafers est représentée par m ensembles de n vecteurs binaires p-dimensionnels. L'objectif est alors de réaliser n m-uplets disjoints contenant exactement un vecteur par ensemble. Ces m-uplets représenteront les piles. Chaque m-uplet peut être représenté par un vecteur binaire p-dimensionnel, chaque composante étant calculée en réalisant le ET binaire des composantes correspondantes des vecteurs qui composent le m-uplet. Autrement dit, une composante du vecteur représentant le m-uplet est égale à un si et seulement si tous les vecteurs ont cette composante égale à un. Et donc une pile de puces est viable si toutes les puces qui la composent sont viables. L'objectif est alors de minimiser le nombre de zéros ou de maximiser le nombre de un. La thèse comporte deux grandes parties. Une partie théorique abordant la complexité des différentes versions du problème en fonction de certains paramètres tels que m, n, p ou encore le nombre maximum de zéros par vecteur. Nous montrons entre autres que ces problèmes peuvent être utilisés pour modéliser des problèmes plus classiques tels que Maximum Clique, Minimum Vertex Cover ou encore k-Dimensional Matching, permettant de prouver un certain nombre de résultats négatifs que ce soit d'un point de vue de la complexité classique, l'approximabilité ou la complexité paramétrée. Nous fournissons également des résultats positifs pour des cas particuliers du problème. Dans un second temps, nous nous intéressons à la résolution pratique du problème en fournissant et comparant un certain nombre de formulations en Programmation Linéaire en Nombres Entiers. Mais nous nous intéressons également aux performances en pratique de certaines heuristiques à garantie de performances détaillées dans la partie théorique. 
/ In this thesis, we focus on Wafer-to-Wafer integration problems. These problems come from IC manufacturing. During the production of three-dimensional processors, dies have to be stacked on top of one another. Until recently, the dies were engraved on a silicon disc called a wafer, then cut, tested and sorted to discard the faulty dies, and finally assembled one on top of the other. However, superimposing wafers instead of dies has several technical and financial advantages. Since faulty dies can only be discarded once the wafer has been cut, superimposing two wafers can lead to stacking a faulty die on a viable one. In this case, the resulting stack of dies is considered faulty. It follows that a bad assignment of the wafers to each other can lead to a disastrous yield. In order to minimize the number of faulty die stacks, a "failure map" of each wafer is generated during a test phase. This map gives the locations of the faulty dies on the wafer. The objective is then to take advantage of this map to define an assignment of the wafers to each other so as to match as many failures as possible. This problem can be modelled as a multidimensional assignment problem. Each wafer can be seen as a vector with as many coordinates as there are dies engraved on it. A coordinate set to zero marks a faulty die, while a coordinate set to one indicates a viable one. Each lot of wafers is represented by a set of vectors. Formally, an instance of a Wafer-to-Wafer integration problem is given by m sets of n p-dimensional binary vectors. The objective is then to partition the vectors into n disjoint m-tuples, each tuple containing exactly one vector per set. An m-tuple represents a stack of wafers. Every m-tuple can be represented by a p-dimensional vector, each coordinate of which is computed by taking the bitwise AND of the corresponding coordinates of the vectors composing the m-tuple. In other words, a coordinate of the representative vector equals one if and only if that coordinate equals one in every vector of the tuple. It follows that a die stack is viable if and only if all the dies composing the stack are viable. The objective is then to maximize the overall number of ones or to minimize the overall number of zeros. The first part of the thesis is theoretical. We study the complexity of the considered versions of the problem with respect to natural parameters such as m, n, p, or the number of zeros per vector. We show that these problems can encode more classical problems such as Maximum Clique, Minimum Vertex Cover or k-Dimensional Matching. This leads to several negative results from the computational complexity, approximability and parameterized complexity points of view. We also provide several positive results for some specific cases of the problem. In the second part, we focus on solving the problem in practice. We provide and compare several Integer Linear Programming formulations, and we also study the practical performance of some approximation algorithms with performance guarantees detailed in the theoretical part.
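A small sketch of the objective just described, on a toy instance (illustrative only; the thesis works with Integer Linear Programming formulations and approximation algorithms on top of this model):

```python
def stack_vector(wafers):
    # Componentwise AND of 0/1 vectors: a die position in the stack is viable
    # only if it is viable on every wafer of the m-tuple.
    return [min(bits) for bits in zip(*wafers)]

def total_ones(assignment):
    # assignment: a list of m-tuples, one wafer (vector) taken from each lot.
    # Objective value: total number of viable die stacks over all tuples.
    return sum(sum(stack_vector(t)) for t in assignment)

# Toy instance: m = 2 lots, n = 2 wafers per lot, p = 3 dies per wafer.
lot_a = [[1, 0, 1], [1, 1, 0]]
lot_b = [[1, 1, 1], [0, 1, 1]]

pairing = [(lot_a[0], lot_b[0]), (lot_a[1], lot_b[1])]
print(total_ones(pairing))  # stacks (1,0,1) and (0,1,0): 3 viable die stacks in total
```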
778

(Méta)-noyaux constructifs et linéaires dans les graphes peu denses / Constructive and Linear (Meta)-Kernelisations on Sparse Graphs

Garnero, Valentin 04 July 2016
En algorithmique et en complexité, la plus grande part de la recherche se base sur l’hypothèse que P ≠ NP (Polynomial time et Non deterministic Polynomial time), c'est-à-dire qu'il existe des problèmes dont la solution peut être vérifiée mais non construite en temps polynomial. Si cette hypothèse est admise, de nombreux problèmes naturels ne sont pas dans P (c'est-à-dire, n'admettent pas d'algorithme efficace), ce qui a conduit au développement de nombreuses branches de l'algorithmique. L'une d'elles est la complexité paramétrée. Elle propose des algorithmes exacts, dont l'analyse est faite en fonction de la taille de l'instance et d'un paramètre. Ce paramètre permet une granularité plus fine dans l'analyse de la complexité.Un algorithme sera alors considéré comme efficace s'il est à paramètre fixé, c'est-à-dire, lorsque sa complexité est exponentielle en fonction du paramètre et polynomiale en fonction de la taille de l'instance. Ces algorithmes résolvent les problèmes de la classe FPT (Fixed Parameter Tractable).L'extraction de noyaux est une technique qui permet, entre autre, d’élaborer des algorithmes à paramètre fixé. Elle peut être vue comme un pré-calcul de l'instance, avec une garantie sur la compression des données. Plus formellement, une extraction de noyau est une réduction polynomiale depuis un problème vers lui même, avec la contrainte supplémentaire que la taille du noyau (l'instance réduite) est bornée en fonction du paramètre. Pour obtenir l’algorithme à paramètre fixé, il suffit de résoudre le problème dans le noyau, par exemple par une recherche exhaustive (de complexité exponentielle, en fonction du paramètre). L’existence d'un noyau implique donc l'existence d'un algorithme à paramètre fixé, la réciproque est également vraie. Cependant, l’existence d'un algorithme à paramètre fixé efficace ne garantit pas un petit noyau, c'est a dire un noyau dont la taille est linéaire ou polynomiale. Sous certaines hypothèses, il existe des problèmes n’admettant pas de noyau (c'est-à-dire hors de FPT) et il existe des problèmes de FPT n’admettant pas de noyaux polynomiaux.Un résultat majeur dans le domaine des noyaux est la construction d'un noyau linéaire pour le problème Domination dans les graphes planaires, par Alber, Fellows et Niedermeier.Tout d'abord, la méthode de décomposition en régions proposée par Alber, Fellows et Niedermeier, a permis de construire de nombreux noyaux pour des variantes de Domination dans les graphes planaires. Cependant cette méthode comportait un certain nombre d’imprécisions, ce qui rendait les preuves invalides. Dans la première partie de notre thèse, nous présentons cette méthode sous une forme plus rigoureuse et nous l’illustrons par deux problèmes : Domination Rouge Bleue et Domination Totale.Ensuite, la méthode a été généralisée, d'une part, sur des classes de graphes plus larges (de genre borné, sans-mineur, sans-mineur-topologique), d'autre part, pour une plus grande variété de problèmes. Ces méta-résultats prouvent l’existence de noyaux linéaires ou polynomiaux pour tout problème vérifiant certaines conditions génériques, sur une classe de graphes peu denses. Cependant, pour atteindre une telle généralité, il a fallu sacrifier la constructivité des preuves : les preuves ne fournissent pas d'algorithme d'extraction constructif et la borne sur le noyau n'est pas explicite. 
Dans la seconde partie de notre thèse nous effectuons un premier pas vers des méta-résultats constructifs ; nous proposons un cadre général pour construire des noyaux linéaires en nous inspirant des principes de la programmation dynamique et d'un méta-résultat de Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh et Thilikos. / In the fields of algorithmics and complexity theory, a large area of research is based on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), which means that there are problems for which a solution can be verified but not constructed in polynomial time. Under this assumption, many natural problems are not in P, which means that they have no efficient algorithm. In order to tackle such problems, many different branches of algorithmics have been developed. One of them is called parameterized complexity. It consists in developing exact algorithms whose complexity is measured as a function of the size of the instance and of a parameter. Such a parameter allows a more precise analysis of the complexity. In this context, an algorithm is considered efficient if it is fixed-parameter tractable (fpt), that is, if its complexity is exponential in the parameter and polynomial in the size of the instance. Problems that can be solved by such an algorithm form the class FPT. Kernelisation is a technique that produces fpt algorithms, among other things. It can be viewed as a preprocessing of the instance, with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial reduction from a problem to itself, with the additional constraint that the size of the kernel, the reduced instance, is bounded by a function of the parameter. In order to obtain an fpt algorithm, it suffices to solve the problem on the reduced instance, by brute force for example (which has complexity exponential in the parameter). Hence, the existence of a kernelisation implies the existence of an fpt algorithm; the converse also holds. Nevertheless, the existence of an efficient fpt algorithm does not imply a small kernel, meaning a kernel of linear or polynomial size. Under certain hypotheses, it can be proved that some problems cannot have a kernel (that is, they are not in FPT) and that some problems in FPT do not have a polynomial kernel. One of the main results in the field of kernelisation is the construction of a linear kernel for the Dominating Set problem on planar graphs, by Alber, Fellows and Niedermeier. To begin with, the region decomposition method proposed by Alber, Fellows and Niedermeier has been reused many times to develop kernels for variants of Dominating Set on planar graphs. Nevertheless, this method had quite a few inaccuracies, which invalidated the proofs. In the first part of our thesis, we present a more rigorous version of this method and illustrate it with two examples: Red Blue Dominating Set and Total Dominating Set. Next, the method has been generalised, on the one hand, to larger classes of graphs (bounded genus, minor-free, topological-minor-free) and, on the other hand, to larger families of problems. These meta-results prove the existence of a linear or polynomial kernel for every problem satisfying certain generic conditions on a class of sparse graphs. As the price of this generality, the proofs do not provide constructive extraction algorithms and the bound on the size of the kernel is not explicit. In the second part of our thesis, we take a first step towards constructive meta-results. 
We propose a framework to build linear kernels based on principles of dynamic programming and a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos.
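For readers unfamiliar with kernelisation, here is the classical Buss kernel for Vertex Cover, given purely as a generic illustration of the technique (the thesis's own contributions concern region-decomposition kernels for domination problems on planar and sparse graphs):

```python
def vertex_cover_kernel(edges, k):
    """Buss kernelisation for Vertex Cover with parameter k.

    Returns (reduced_edges, remaining_budget, forced_vertices),
    or None when (G, k) is recognised as a no-instance.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        if k < 0:
            return None  # more vertices were forced than the budget allows
        # Any vertex of degree > k must be in every vertex cover of size <= k.
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
    # Every remaining vertex covers at most k edges, so a cover of size <= k
    # covers at most k*k edges: larger remaining instances are no-instances.
    if len(edges) > k * k:
        return None
    return edges, k, forced
```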
779

Graph labeling and non-separating trees

Unknown Date
This dissertation studies two independent problems: one concerns graph labeling and the other relates to connectivity conditions in a simple graph. Graph labeling is a rapidly developing area of research in graph theory, with connections to a variety of application-oriented areas such as VLSI optimization, data structures and data representation. Furthermore, connectivity conditions in simple graphs may help us to study new aspects of ad hoc networks, social networks and web graphs. In chapter 2, we study path systems, reduced path systems and how to construct a super edge-graceful tree with any number of edges using path systems. First, we give an algorithm to reduce a labeled path system to a smaller labeled path system of a different type. We investigate the cases (m, k) = (3, 5) and (m, k) = (4, 7), where m is the number of paths and 2k is the length of each path, and then give a generalization for any k with m = 3 and m = 4. We also describe a procedure to construct a super edge-graceful tree with any number of edges. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
780

Design de Moda : framework para implementação de estratégias de inovação pelo Design no processo de desenvolvimento de produtos de moda em empresas de confecção do vestuário / Fashion Design: a framework for implementing Design-driven innovation strategies in the fashion product development process of apparel manufacturing companies

Nicchelle, Keila Marina January 2018
O Design, inserido na complexidade do atual ambiente social, cultural, produtivo e mercadológico se apresenta como uma atividade complexa, o que exige o estudo de estratégias de inovação direcionadas a resolução de problemas de projeto, aplicáveis ao Design de Moda. Nesse sentido, o presente estudo propõe a apresentação de um framework para implementação de estratégias de inovação pelo Design no processo de desenvolvimento de produtos de moda em empresas de confecção do vestuário, valendo-se da abordagem teórico-metodológica de Design Estratégico. Frente a essa proposta de estudo, a pesquisa foi desenvolvida em três fases sistêmicas: Epistemologia do Projeto, Praxiologia do Projeto e Pedagogia do Projeto. A primeira fase compreendeu a realização de uma Pesquisa Bibliográfica sobre o tema com o objetivo de refletir sobre a natureza complexa da atividade de Design sob a perspectiva da cultura de projeto, trazendo algumas abordagens conceituais e metodológicas em Design Estratégico e Design de Moda. A segunda fase compreendeu a realização de uma Pesquisa de Campo e de um Estudo de Caso com o objetivo identificar as práxis de Design adotadas em empresas de confecção do vestuário vinculadas ao Arranjo Produtivo Local Têxtil e do Vestuário (APL Polovest), localizado na região do Alto Uruguai Gaúcho. Inicialmente, buscou-se identificar a presença e as formas de atuação do Design nestas empresas, e posteriormente, diagnosticar a função do Design no processo de desenvolvimento de produtos de moda em três empresas com potencial para implementação de estratégias de inovação pelo Design. A terceira fase compreendeu a realização de uma Pesquisa Ação com o objetivo de implementar estratégias de inovação pelo Design no processo de desenvolvimento de produtos de moda nas três empresas que participaram da fase anterior do estudo. Os resultados obtidos nestas fases da pesquisa permitiram a reflexão sobre a teoria e a prática em Design de Moda, possibilitando a configuração de um framework para orientar a implementação de estratégias de inovação no processo de desenvolvimento de produtos de moda em empresas de confecção do vestuário. O framework propõe a construção de uma nova cultura de projeto nestas empresas, capitalizando um conjunto de conhecimentos em torno da ação projetual, a fim de direcionar à identificação, à análise e a resolução de problemas de projeto, inseridos na complexidade da atividade de Design. Portanto, espera-se que a aplicação do framework possa indicar oportunidades de inovação projetual, tornando estas empresas mais competitivas por meio de suas ofertas, com vistas ao fortalecimento do setor do vestuário na região do Alto Uruguai Gaúcho. / Design, inserted in the complexity of the current social, cultural, productive and market environment, represents a complex activity, requiring the study of innovation strategies directed to solving project problems, applicable to Fashion Design. In this regard, the present study proposes a Design framework for implementing innovation strategies in the development of fashion products in apparel manufacturing companies, drawing on the theoretical-methodological approach of Strategic Design. Given this study’s proposal, research was developed in three systemic phases: Project Epistemology, Project Praxeology and Project Pedagogy. 
The first phase involved a literature review on the theme, reflecting on the complex nature of the Design activity from a project culture perspective and presenting some conceptual and methodological approaches in Strategic Design and Fashion Design. The second phase involved a Field Survey and a Case Study, with the goal of identifying the Design praxis adopted by apparel manufacturing companies linked to the Local Textile and Clothing Productive Arrangement (APL Polovest), located in the Alto Uruguai Gaúcho region, southern Brazil. Initially, we sought to identify the presence of Design and the ways it operates in these companies and, later, to diagnose the role of Design in the fashion product development process of three companies selected for their potential to implement innovation strategies through Design. The third phase involved an Action Research study aimed at implementing innovation strategies through Design in the fashion product development process of the three companies that took part in the previous phase. The results obtained in these phases supported a reflection on Fashion Design theory and practice and enabled the configuration of a framework to guide the implementation of Design innovation strategies in the fashion product development process of apparel manufacturing companies. The framework proposes the construction of a new project culture in these companies, capitalising a body of knowledge around project activity in order to guide the identification, analysis and resolution of design problems embedded in the complexity of the Design activity. Therefore, the application of the framework is expected to reveal opportunities for project innovation, making these companies more competitive through their offerings, with a view to strengthening the clothing sector in the Alto Uruguai Gaúcho region.
