771 |
Dimensão global forte e complexidade na categoria derivada / Strong global dimension and complexity in the derived category. Medeiros, Francisco Batista de, 28 November 2014
Apresentamos neste trabalho uma definição de complexidade na categoria derivada de complexos (limitados superiormente) de módulos sobre uma k-álgebra de dimensão finita. Um dos resultados que conseguimos foi uma relação entre a complexidade de objetos indecomponíveis e a noção de dimensão global forte. Mais especificamente, mostramos que a existência de um objeto indecomponível na categoria derivada limitada superiormente com complexidade não nula é condição suficiente para que a respectiva álgebra tenha dimensão global forte infinita. Também investigamos se existe uma relação entre as dimensões global e global forte da classe das álgebras shod (Coelho e Lanzilotta, 1999). Fomos motivados pela caracterização da classe das álgebras quase inclinadas (Happel, Reiten e Smalo, 1996) em termos da sua dimensão global forte, dada por D. Happel e D. Zacharia (2008), e pelo fato das álgebras shod serem uma generalização das álgebras quase inclinadas. Nossa conclusão foi que não existe, em geral, uma caracterização das álgebras shod em termos de sua dimensão global forte. Isto é, mostramos que para cada inteiro d > 2 existe uma álgebra shod estrita cuja dimensão global forte é igual a d. / We introduce in this thesis a definition of complexity in the derived category of bounded above complexes of modules over a finite dimensional k-algebra. One of our results is a relationship between the complexity of indecomposable objects and the notion of strong global dimension. More specifically, we prove that the existence of an indecomposable object in the bounded above derived category with nonzero complexity is a sufficient condition for the corresponding algebra to have infinite strong global dimension. We also investigate whether there is a relationship between the global dimension and the strong global dimension of shod algebras (Coelho and Lanzilotta, 1999). Our motivation came from the characterization of quasitilted algebras (Happel, Reiten and Smalo, 1996) in terms of their strong global dimension, given by D. Happel and D. Zacharia (2008), and from the fact that shod algebras are a generalization of quasitilted algebras. Our conclusion was that there is, in general, no characterization of shod algebras in terms of their strong global dimension: we show that for each integer d > 2 there exists a strictly shod algebra whose strong global dimension is d.
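The abstract does not spell out which notion of complexity is meant; as background, a plausible formulation, modeled on the classical complexity of a module (the polynomial growth rate of a minimal projective resolution), is sketched below. The precise definition used in the thesis may differ.

```latex
% Hedged sketch: complexity of a bounded above complex X over the algebra A,
% measured through the growth of the terms P_n of a minimal projective
% resolution of X; this mirrors the usual definition for modules.
\[
  \operatorname{cx}(X) \;=\;
  \inf\bigl\{\, d \in \mathbb{N} \;:\;
    \exists\, c > 0 \ \text{with}\ \dim_k P_n \le c\, n^{\,d-1}
    \ \text{for all}\ n \gg 0 \,\bigr\}.
\]
% Under this reading, cx(X) = 0 means the resolution eventually vanishes, so
% an indecomposable object with nonzero complexity cannot be replaced by a
% bounded complex of projectives, which is consistent with the result above
% relating nonzero complexity to infinite strong global dimension.
```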
|
772 |
Dimensão fractal e métodos quantitativos aplicados ao estudo de comunidades do macrobentos marinhos / Fractal dimension and quantitative methods applied to the study of marine macrobenthic communities. Rodrigues, Carina Waiteman, 19 October 2017
A macroalga Sargassum C. Agardh é de reconhecida importância ecológica nos ecossistemas costeiros, particularmente nas comunidades de costões rochosos de regiões tropicais e temperadas quentes. Está amplamente distribuída na costa sudeste brasileira, sendo frequente em costões rochosos de locais moderados ou protegidos do embate de ondas. Nesses ambientes pode formar bancos densos e extensos, estruturalmente complexos, capazes de prover microhabitats variados para uma grande diversidade de organismos. Os bancos de Sargassum são suscetíveis a mudanças sazonais na sua biomassa e/ou estado fisiológico relacionados a fatores abióticos e bióticos, sendo que essas variações podem influenciar drasticamente a distribuição e densidade dos organismos associados às algas. A complexidade desse substrato tem sido avaliada por meio de várias medidas, sendo quantitativas, como peso, as mais utilizadas. O presente trabalho tem como objetivo avaliar o desempenho de diferentes medidas quantitativas da complexidade estrutural de bancos de Sargassum filipendula i) no desenvolvimento temporal da complexidade estrutural da alga em dois ambientes hidrodinamicamente diferentes e ii) na composição e distribuição dos organismos epifaunais. Foram coletadas 15 frondes de S. filipendula por mês, durante 13 meses, nas praias da Fortaleza e do Lamberto, Ubatuba, SP. A fauna presente nessas frondes foi obtida através de lavagem e peneiramento contínuo. O período de amostragem caracterizou-se por ser atípico, apresentando altas temperaturas ao longo de todo o ano e um verão com baixa pluviometria. Este fato influenciou fortemente a variação sazonal do S. filipendula e epífitas associadas, ocasionando elevados valores de peso seco entre a primavera e verão. Os resultados das comparações das medidas analisadas mostraram que um único parâmetro não é representativo da complexidade estrutural da alga, uma vez que cada medida apresentou diferenças em relação à abundância e diversidade da fauna. Estes dois indicadores também mostraram correlação positiva com todos os parâmetros de complexidade do substrato. Houve diferença significativa entre as praias, e as frondes do Lamberto foram estruturalmente mais complexas, suportando a maior abundância. Contudo, foram as frondes da Fortaleza que exibiram os maiores valores de riqueza de grupos. Discute-se o emprego de mais de uma medida quantitativa para mensurar a complexidade estrutural do habitat. / The macroalga Sargassum C. Agardh is of recognized ecological importance in coastal ecosystems, particularly in rocky shore communities of tropical and warm temperate regions. It is widely distributed along the southeastern coast of Brazil, being frequent on rocky shores at sites moderately exposed to or sheltered from wave action. In these environments it can form dense, extensive and structurally complex beds, capable of providing varied microhabitats for a great diversity of organisms. Sargassum beds are susceptible to seasonal changes in their biomass and/or physiological status related to abiotic and biotic factors, and these variations can drastically influence the distribution and density of the organisms associated with the algae. The complexity of this substrate has been evaluated by means of several measures, quantitative ones, such as dry weight, being the most used. The present work aims to evaluate the performance of different quantitative measures of the structural complexity of Sargassum filipendula beds i) in the temporal development of the structural complexity of the alga in two hydrodynamically different environments and ii) in the composition and distribution of the epifaunal organisms. Fifteen fronds of S. filipendula were collected per month, for 13 months, at Fortaleza and Lamberto beaches, Ubatuba, SP. The fauna present on these fronds was obtained through continuous washing and sieving. The sampling period was atypical, with high temperatures throughout the year and a summer with low rainfall. This strongly influenced the seasonal variation of S. filipendula and its associated epiphytes, leading to high dry weight values between spring and summer. The comparisons among the measures analyzed showed that no single parameter is representative of the structural complexity of the alga, since each measure related differently to the abundance and diversity of the fauna. These two indicators also showed a positive correlation with all the substrate complexity parameters. There was a significant difference between the beaches, and the Lamberto fronds were structurally more complex, supporting the greatest abundance. However, it was the Fortaleza fronds that exhibited the highest values of group richness. The use of more than one quantitative measure to assess the structural complexity of the habitat is discussed.
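The abstract does not say how the fractal dimension of the fronds was estimated; a common image-based approach is box counting, sketched below in Python. The binary mask, box sizes and function name are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2D binary mask,
    e.g. a thresholded photograph of an algal frond."""
    counts = []
    for size in box_sizes:
        # Trim the mask so it tiles exactly into size x size boxes
        h = (mask.shape[0] // size) * size
        w = (mask.shape[1] // size) * size
        boxes = mask[:h, :w].reshape(h // size, size, w // size, size)
        # Count the boxes containing at least one occupied pixel
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    # The dimension is the slope of log(count) against log(1 / box size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(np.array(counts)), 1)
    return slope

# Example with a synthetic mask (a filled circle has dimension close to 2)
yy, xx = np.mgrid[0:256, 0:256]
circle = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
print(round(box_counting_dimension(circle), 2))
```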
|
773 |
A formação de bacharéis em gestão ambiental: complexidade e os desafios socioambientais contemporâneos / Undergraduate courses on environmental management: a complex subject involving many contemporary socio-environmental challenges. Morgado, Renato Pellegrini, 26 June 2012
Esta pesquisa buscou analisar os cursos de bacharelado em Gestão Ambiental e contribuir com a construção de diretrizes para essa formação. Para isso realizaram-se uma revisão bibliográfica, que fundamentasse as diretrizes dessa formação, a análise dos projetos pedagógicos e das estruturas curriculares e entrevistas com coordenadores de quatro cursos, das seguintes instituições: ESALQ/USP, EACH/USP, UFPR e UNIPAMPA. As referências consultadas sinalizam que os desafios socioambientais demandam, para sua compreensão e gestão, um pensamento complexo, que evidencie as relações e inter-relações existentes entre o meio natural e o social. Nessa concepção, devem-se privilegiar estratégias de ensino e de aprendizagem que coloquem o estudante no centro do processo educativo e que utilizem a realidade dos problemas ambientais como espaço formativo. O curso da UFPR foi o que mais se aproximou dessas características, enquanto os da ESALQ/USP e da UNIPAMPA se distanciaram delas e o da EACH ocupou uma posição intermediária. / The objective of this study was to analyze undergraduate courses available in Brazil on Environmental Management and to contribute to establishing guidelines to improve them. The author carried out a literature review to ground such guidelines, analyzed the pedagogical projects and curricular structures of four courses, and interviewed their coordinators at the following Brazilian institutions of higher learning: ESALQ/USP, EACH/USP, UFPR and UNIPAMPA. The results of this survey show that, in order to properly comprehend and find solutions for socio-environmental challenges, the environmental management professional must have a clear understanding of complex issues involving the relations and inter-relations between the natural and the social environments. Along these lines, high priority should be given to teaching-learning strategies that place the student at the center of the learning process and that use environmental problems to provide hands-on experience. The course offered by UFPR showed the closest adherence to this approach, while the ESALQ/USP and UNIPAMPA courses were farther away from it. The course offered by EACH/USP occupied an intermediate position.
|
774 |
A influência dos padrões para sistemas de gestão no desempenho das empresas / The influence of management systems standards on the performance of firms. Giovannini, Fabrizio, 18 November 2008
As empresas precisam de sistemas de regras, formais ou não, para serem administradas. Os sistemas de regras para a administração são aqui chamados de sistemas de gestão. Entre as funções do administrador está, portanto, a construção de sistemas de gestão que atendam às necessidades dos stakeholders da empresa e que assegurem a sobrevivência e a prosperidade da mesma. O problema pode ser dividido em duas partes: que tipo de sistema é o melhor para a organização e qual a melhor forma de implantá-lo. No corpo do conhecimento da administração há inumeráveis soluções propostas para esses problemas, embasadas em diferentes abordagens e teorias, e dirigidas a diferentes tipos de empresas e contextos. Entre os esforços para a busca de soluções cada vez melhores ganhou ímpeto, nos últimos 15 anos, um movimento de criação de Padrões para Sistemas de Gestão (PSGs). O padrão que se tornou mais conhecido é a norma ISO 9000, voltado à criação de sistemas de gestão da qualidade. Depois desse padrão pioneiro, outros foram e estão sendo criados com o objetivo de abranger a maioria das áreas dos sistemas de gestão: meio ambiente, saúde e segurança do trabalho, responsabilidade social, governança, demonstrações financeiras e contabilidade, entre outros. Apesar de um número crescente de organizações estar optando por usar estes padrões para criar seus sistemas de gestão, há dúvidas e controvérsias sobre sua eficiência e eficácia. Esta tese investiga a influência dos PSGs sobre o desempenho das empresas. Através de uma pesquisa quantitativa compara o desempenho, ao longo de 4 anos, de empresas com diferentes níveis de adoção de PSGs. A pesquisa foi feita com todas as indústrias de autopeças brasileiras permitindo, dessa forma, certo controle das variáveis intervenientes. A qualidade dos dados contou com a inestimável colaboração da empresa de informações comerciais Serasa S.A., que forneceu os dados de desempenho financeiro e de outras variáveis intervenientes. Utilizando-se técnicas estatísticas uni e multivariadas, entre elas a Modelagem de Equações Estruturais, rejeitou-se a hipótese nula de que não há influência dos PSGs sobre o desempenho das empresas. Esse resultado, em conjunto com outras evidências estatísticas, permite afirmar, dentro dos limites do modelo conceitual da pesquisa, que a adoção de PSGs influencia positivamente o desempenho das indústrias de autopeças brasileiras. Há também indícios de que empresas pequenas e médias se beneficiam mais da adoção de PSGs do que as grandes empresas. Portanto, Padrões para Sistemas de Gestão podem ser recomendados como uma ferramenta potencialmente eficaz, especialmente para empresas industriais. / Firms need systems of rules, formal or not, in order to be managed. Such systems of rules are here called management systems. Among the manager's functions is, therefore, the construction of management systems that meet the needs of the firm's stakeholders and that ensure its survival and prosperity. The problem can be divided into two parts: what kind of system is best for the organization, and what is the best way to implement it. In the management body of knowledge there are innumerable proposed solutions to these problems, based on different approaches and theories and directed at different kinds of firms and contexts. Among the efforts to find ever better solutions, a movement for the creation of Management Systems Standards has gained momentum over the last 15 years. The standard that became best known is ISO 9000, dedicated to the creation of quality management systems. After this pioneering standard, others were and are being created with the goal of covering most of the areas where management systems are applied: environment, occupational health and safety, social responsibility, governance, financial reporting and accounting, among others. Although a growing number of organizations are opting to use these standards to create their own management systems, there are doubts and controversies regarding their efficiency and effectiveness. This thesis investigates the influence of Management Systems Standards on firm performance. Through quantitative research, it compares the performance, over a 4-year period, of firms with different levels of Management Systems Standards adoption. The research covered all Brazilian auto parts manufacturers, thus allowing some control over the intervening variables. The quality of the data owes much to the invaluable collaboration of the commercial information firm Serasa S.A., which supplied the financial performance data and other intervening variables. Using univariate and multivariate statistical techniques, among them Structural Equation Modeling, the null hypothesis that Management Systems Standards have no influence on firm performance was rejected. This result, together with other statistical evidence, allows us to affirm, within the limits of the research's conceptual model, that the adoption of Management Systems Standards positively influences the performance of Brazilian auto parts manufacturers. There are also indications that small and medium-sized companies benefit more from the adoption of Management Systems Standards than large firms. Therefore, Management Systems Standards can be recommended as a potentially effective tool, especially for manufacturing firms.
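The abstract names Structural Equation Modeling among the techniques used; as a much simpler stand-in (explicitly not the author's model), the core question of whether performance varies with the level of standards adoption, controlling for firm size, could be sketched as an ordinary least squares regression. The data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sample of auto parts firms: one row per firm, with its average
# 4-year performance, a count of adopted management system standards
# (e.g. ISO 9000, ISO 14001, OHSAS 18001) and a size control
df = pd.read_csv("autoparts_firms.csv")

# Does the number of adopted standards explain performance once size is
# controlled for?  (A simple stand-in for the thesis's structural equation model.)
model = smf.ols("performance ~ standards_adopted + firm_size", data=df).fit()
print(model.summary())
```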
|
775 |
Aprendizagem de uma habilidade motora com diferentes níveis de complexidade em sujeitos pós acidente vascular cerebral / Learning a motor skill task with different levels of complexity in post-stroke subjects. Palma, Gisele Carla dos Santos, 17 November 2016
Introdução: Ainda não é conhecido na literatura se os sujeitos pós-AVC têm a capacidade de aprender ações complexas. Os estudos existentes apresentam falhas metodológicas, assim como na implementação da complexidade da tarefa. Este trabalho investigou os efeitos da manipulação da complexidade da tarefa na aprendizagem motora desta população. Método: Vinte e quatro sujeitos pós-AVC (Grupo experimental - GE) e vinte e quatro sujeitos saudáveis (Grupo controle - GC) foram selecionados e divididos em dois experimentos: baixa complexidade e alta complexidade. Foram desenvolvidas duas tarefas denominadas de baixa e de alta complexidade, com variação não só no número de elementos, mas também na carga de processamento exigida para a execução da mesma, as quais deram origem ao experimento 1 e 2, respectivamente. Esta tarefa era executada em ambiente de realidade virtual, a partir do deslocamento do centro de pressão para controle dos objetos móveis na tela. O delineamento foi constituído de 150 tentativas para a fase de aquisição, dividida em 3 dias de prática. Após 4 dias sem prática foi realizado o teste de retenção (RET) e transferência (TR). A alteração realizada no teste de TR foi na direção do deslocamento dos objetos. As variáveis dependentes de ambos experimentos foram: pontuação e tempo de execução. Para cada variável dependente foi conduzida uma Anova two-way com medidas repetidas (2 grupos x 4 momentos) seguida de post hoc de Tukey para identificar os momentos das mudanças. Resultados: No experimento 1, para a variável pontuação ambos os grupos melhoraram o desempenho ao longo da aquisição e o mantiveram no teste de retenção. Não houve diferença entre o GC e GE. Para a variável tempo de execução, o GC diferenciou-se do GE em todos os momentos (início e final da aquisição, teste de retenção e transferência), apresentando pior desempenho. No experimento 2, tanto para a variável pontuação quanto para o tempo de execução, o GE diferenciou-se do GC no final da aquisição e no teste de RET. Conclusão: Os sujeitos pós-AVC foram capazes de aprender a tarefa de baixa complexidade, mas esta aprendizagem não foi passível de generalização. Quanto à tarefa de alta complexidade, os sujeitos do GE não aprenderam a tarefa, sendo muito impactados pelo aumento de complexidade. Possivelmente, os déficits cognitivos relacionados ao planejamento e sequenciamento de ações impactaram na aquisição da habilidade de alta complexidade e não prejudicaram a aprendizagem da tarefa de baixa complexidade. / Introduction: It is not yet known in the literature whether post-stroke subjects have the ability to learn complex actions. Existing studies have methodological flaws, including in the way task complexity is implemented. This study investigated the effects of manipulating task complexity on motor learning in this population. Method: Twenty-four post-stroke subjects (experimental group - EG) and twenty-four healthy subjects (control group - CG) were selected and divided into two experiments: low complexity and high complexity. Two tasks, named low and high complexity, were developed, varying not only the number of elements but also the processing load required for execution, giving rise to experiments 1 and 2, respectively. The task was performed in a virtual reality environment, using the displacement of the center of pressure to control the moving objects on the screen. The design consisted of 150 trials in the acquisition phase, divided into three days of practice. After 4 days without practice, retention (RET) and transfer (TR) tests were performed. The change introduced in the TR test was in the direction of the objects' movement. The dependent variables in both experiments were score and execution time. For each dependent variable, a two-way repeated measures ANOVA (2 groups x 4 time points) was conducted, followed by Tukey's post hoc test to identify when changes occurred. Results: In experiment 1, for the score variable, both groups improved their performance throughout acquisition and maintained it in the retention test. There was no difference between the CG and the EG. For the execution time variable, the CG differed from the EG at all time points (beginning and end of acquisition, retention and transfer tests), with worse performance. In experiment 2, for both score and execution time, the EG differed from the CG at the end of acquisition and in the RET test. Conclusion: The post-stroke subjects were able to learn the low complexity task, but this learning did not generalize. As for the high complexity task, the EG subjects did not learn it, being strongly affected by the increase in complexity. Possibly, cognitive deficits related to the planning and sequencing of actions affected the acquisition of the high complexity skill while not impairing the learning of the low complexity task.
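The 2 (group) x 4 (time) design with one between-subjects factor and repeated measures corresponds to a mixed-design ANOVA; a minimal sketch with the pingouin package is shown below. The data file and column names are hypothetical, and pairwise_tests is used as a stand-in for the Tukey post hoc reported in the abstract.

```python
import pandas as pd
import pingouin as pg  # assumes pingouin >= 0.5

# Hypothetical long-format data: one row per subject per time point, with
# columns 'subject', 'group' (EG / CG), 'time' (4 levels) and 'score'
df = pd.read_csv("motor_learning_scores.csv")

# Mixed ANOVA: 'group' is between-subjects, 'time' is the repeated factor
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group")
print(aov.round(3))

# Follow-up pairwise comparisons (stand-in for Tukey's post hoc test)
posthoc = pg.pairwise_tests(data=df, dv="score", within="time",
                            subject="subject", between="group",
                            padjust="bonf")
print(posthoc.round(3))
```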
|
776 |
Analyse de la complexité des programmes par interprétation sémantique / Program complexity analysis by semantic interpretation. Péchoux, Romain, 14 November 2007
Il existe de nombreuses approches développées par la communauté Implicit Computational Complexity (ICC) permettant d'analyser les ressources nécessaires à la bonne exécution des algorithmes. Dans cette thèse, nous nous intéressons plus particulièrement au contrôle des ressources à l'aide d'interprétations sémantiques. Après avoir rappelé brièvement la notion de quasi-interprétation ainsi que les différentes propriétés et caractérisations qui en découlent, nous présentons les différentes avancées obtenues dans l'étude de cet outil : nous étudions le problème de la synthèse qui consiste à trouver une quasi-interprétation pour un programme donné, puis, nous abordons la question de la modularité des quasi-interprétations. La modularité permet de diminuer la complexité de la procédure de synthèse et de capturer un plus grand nombre d'algorithmes. Après avoir mentionné différentes extensions des quasi-interprétations à des langages de programmation réactifs, bytecode ou d'ordre supérieur, nous introduisons la sup-interprétation. Cette notion généralise la quasi-interprétation et est utilisée dans des critères de contrôle des ressources afin d'étudier la complexité d'un plus grand nombre d'algorithmes dont des algorithmes sur des données infinies ou des algorithmes de type diviser pour régner. Nous combinons cette notion à différents critères de terminaison comme les ordres RPO, les paires de dépendance ou le size-change principle et nous la comparons à la notion de quasi-interprétation. En outre, après avoir caractérisé des petites classes de complexité parallèles, nous donnons quelques heuristiques permettant de synthétiser des sup-interprétations sans la propriété sous-terme, c'est-à-dire des sup-interprétations qui ne sont pas des quasi-interprétations. Enfin, dans un dernier chapitre, nous adaptons les sup-interprétations à des langages orientés-objet, obtenant ainsi différents critères pour contrôler les ressources d'un programme objet et de ses méthodes / There are several approaches developed by the Implicit Computational Complexity (ICC) community to analyze and control program resources. In this document, we focus on resource control with the help of semantic interpretations. After introducing the notion of quasi-interpretation together with the various properties and characterizations it gives rise to, we present the progress obtained in the study of this tool: we study the synthesis problem, which consists in finding a quasi-interpretation for a given program, and we tackle the issue of quasi-interpretation modularity. Modularity makes it possible to decrease the complexity of the synthesis procedure and to capture more algorithms. We present several extensions of quasi-interpretations to reactive programming, bytecode verification and higher-order programming. Afterwards, we introduce the notion of sup-interpretation. This notion strictly generalizes that of quasi-interpretation and is used in distinct criteria in order to control the resources of more algorithms, including algorithms over infinite data and algorithms using a divide and conquer strategy. We combine sup-interpretations with distinct termination criteria, such as RPO orderings, dependency pairs or the size-change principle, and we compare them to the notion of quasi-interpretation. Using the notion of sup-interpretation, we characterize small parallel complexity classes.
We also provide some heuristics for sup-interpretation synthesis: we manage to synthesize sup-interpretations without the subterm property, that is, sup-interpretations which are not quasi-interpretations. Finally, we extend sup-interpretations to object-oriented programs, thus obtaining distinct criteria for the resource control of object-oriented programs and their methods.
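As background for readers unfamiliar with the tool, the usual definition of a quasi-interpretation (in the style of Bonfante, Marion and Moyen) can be sketched as follows; the presentation in the thesis may differ in its details.

```latex
% An assignment theta maps each symbol f of arity n to a monotone function
% theta(f) : (R^+)^n -> R^+ and extends to terms homomorphically, with
% theta(x) = x for variables.  It is a quasi-interpretation of a program when
\[
  \text{(subterm)}\quad \theta(f)(X_1,\dots,X_n) \ge X_i \ \ \text{for all } i,
  \qquad
  \text{(rules)}\quad \theta(l) \ge \theta(r) \ \ \text{for every rule } l \to r.
\]
% Roughly speaking, a sup-interpretation only requires a bound on the size of
% the values actually computed and drops the subterm condition, which is why
% every quasi-interpretation is a sup-interpretation but not conversely.
```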
|
777 |
Complexité d'ordre supérieur et analyse récursive / Higher order complexity and computable analysis. Férée, Hugo, 10 December 2014
Alors que la complexité des fonctions d'ordre 1 est bien définie et étudiée, il n'existe pas de notion satisfaisante à tout ordre. Une telle théorie existe déjà à l'ordre 2 et permet de définir une classe analogue aux fonctions calculables en temps polynomial usuelles. Cela est tout particulièrement intéressant dans le domaine de l'analyse récursive où l'on peut représenter, entre autres, les nombres et les fonctions réelles par des fonctions d'ordre 1. On peut alors remarquer un lien fort entre calculabilité et continuité, et aussi rapprocher la complexité avec certaines propriétés analytiques, ce que nous illustrons dans le cas des opérateurs réels. Nous prouvons cependant que, du point de vue de la complexité, les fonctions d'ordre 1 ne permettent pas de représenter fidèlement certains espaces mathématiques. Ce résultat appuie tout particulièrement la nécessité d'une théorie de la complexité d'ordre supérieur. Nous développons alors un modèle de calcul basé sur la sémantique des jeux, où l'application de deux fonctions est représentée par la confrontation de deux stratégies dans un jeu. En définissant la taille de telles stratégies, nous pouvons déduire une notion robuste et pertinente de complexité pour ces stratégies et donc pour les fonctions d'ordre supérieur. Nous définissons aussi une classe de fonctions calculables en temps polynomial qui paraît être un bon candidat pour définir une classe de complexité raisonnable à tout ordre / While first order complexity is well defined and studied, there is no satisfactory notion of complexity at higher orders. Such a theory already exists at order 2 and provides a complexity class analogous to the usual polynomial time computable functions. This is already especially interesting in the domain of computable analysis, where real numbers or real functions, for example, can be represented by first order functions. In particular, there is a clear link between computability and continuity, and we also illustrate, in the case of real operators, that complexity can be related to some analytical properties. However, we prove that, from a complexity point of view, some mathematical spaces cannot be faithfully represented by order 1 functions and require higher order ones. This result underlines the need to develop a notion of complexity at higher types which will be useful, in particular but not only, to computable analysis. We have developed a computational model for higher type sequential computations based on a game semantics approach, where the application of two functions is represented by a game opposing two strategies. By defining the size of such strategies, we are able to define a robust and meaningful notion of complexity at all types, together with a class of polynomial time computable higher order functionals which seems to be a good candidate for a class of feasible functionals at higher types.
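The order 2 theory referred to is usually attributed to Kapron and Cook; its central device, second-order polynomials, is recalled below as standard background, not as material specific to this thesis.

```latex
% Second-order polynomials over a ground variable n and a first-order
% variable L are generated by
\[
  P \;::=\; c \;\mid\; n \;\mid\; P + P \;\mid\; P \cdot P \;\mid\; L(P),
\]
% and a type-2 functional F is polynomial-time computable when some oracle
% Turing machine computes F(f, x) within time P(|f|, |x|), where the size of
% the oracle is |f|(n) = max_{|y| <= n} |f(y)|.  The thesis seeks a similarly
% robust notion at all finite types, via game semantics.
```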
|
778 |
Partage de secret et théorie algorithmique de l'information / Secret Sharing and Algorithmic Information Theory. Kaced, Tarik, 04 December 2012
Notre travail sur le partage de secret se base sur les points de vue théoriques de la Théorie de l'Information de Shannon et de la Complexité de Kolmogorov. Nous allons expliquer comment ces trois sujets sont intimement liés. Les inégalités d'information jouent un rôle central dans ce manuscrit. Ce sont les inégalités pour l'entropie de Shannon, mais elles correspondent aussi aux inégalités pour la complexité de Kolmogorov. La complexité de Kolmogorov formalise l'idée d'aléatoire pour les chaînes de caractères. Ce sont là deux raisons qui justifient à elles seules la notion de partage de secret algorithmique dans le cadre de la Théorie Algorithmique de l'Information (si l'on sait partager un secret aléatoire, on peut partager n'importe quel secret). Originellement étudié par sa définition combinatoire, le partage de secret a été plus tard généralisé par sa formulation à l'aide des quantités définies dans la théorie de l'information. Cette étape a permis l'utilisation des inégalités d'information et s'est révélée très importante dans la caractérisation des schémas de partage de secret efficaces. L'étude des inégalités d'information n'en est qu'à ses débuts. Nous y contribuons en introduisant la notion d'inégalités essentiellement conditionnelles, qui montre une fois de plus que ces inégalités ne sont pas encore complètement comprises. / Our work deals with secret sharing from the theoretical points of view of Shannon's Information Theory and Kolmogorov's Algorithmic Information Theory. We explain how these three subjects are naturally and deeply intertwined. Information inequalities play a central role in this text. They are the inequalities for Shannon entropy, but they are also in exact correspondence with the inequalities for Kolmogorov complexity. Kolmogorov complexity formalizes the idea of randomness for strings. These two reasons alone justify considering the notion of secret sharing in the algorithmic framework (if one can share a random secret, one can share anything). Originally, secret sharing was studied under a combinatorial lens; only later was it more generally formalized using information-theoretic measures. This step allowed the use of information inequalities, which proved to be very important in characterizing efficient secret-sharing schemes. The investigation of information inequalities is still in its early stages. We contribute to the subject by introducing the notion of essentially conditional inequalities, which shows once again that information inequalities are not yet fully understood.
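The information-theoretic formulation mentioned in the abstract is standard and worth recalling; for a secret S, shares S_1, ..., S_n and an access structure Gamma, a perfect scheme satisfies the following conditions (a sketch of the usual definition, not necessarily the thesis's exact notation).

```latex
\[
  H(S \mid S_A) = 0 \ \ \text{for every authorized set } A \in \Gamma,
  \qquad
  H(S \mid S_B) = H(S) \ \ \text{for every unauthorized set } B \notin \Gamma,
\]
% where S_A denotes the joint random variable of the shares held by A.
% Replacing the Shannon entropy H by Kolmogorov complexity (up to logarithmic
% terms) gives, roughly, the algorithmic variant of secret sharing discussed
% above, and the linear inequalities valid for H are the information
% inequalities the thesis studies.
```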
|
779 |
Approximation et complexité paramétrée de problèmes d'optimisation dans les graphes : partitions et sous-graphes / Approximation and parameterized complexity of graph optimisation problems: partitions and subgraphs. Watrigant, Rémi, 02 October 2014
La théorie de la NP-complétude nous apprend que pour un certain nombre de problèmes d'optimisation, il est vain d'espérer un algorithme efficace calculant une solution optimale. Partant de ce constat, un moyen pour contourner cet obstacle est de réaliser un compromis sur chacun de ces critères, engendrant deux approches devenues classiques. La première, appelée approximation polynomiale, consiste à développer des algorithmes efficaces et retournant une solution proche d'une solution optimale. La seconde, appelée complexité paramétrée, consiste à développer des algorithmes retournant une solution optimale mais dont l'explosion combinatoire est capturée par un paramètre de l'entrée bien choisi. Cette thèse comporte deux objectifs. Dans un premier temps, nous proposons d'étudier et d'appliquer les méthodes classiques de ces deux domaines afin d'obtenir des résultats positifs et négatifs pour deux problèmes d'optimisation dans les graphes : un problème de partition appelé Sparsest k-Compaction, et un problème de recherche d'un sous-graphe avec une cardinalité fixée appelé Sparsest k-Subgraph. Dans un second temps, nous présentons comment les méthodes de ces deux domaines ont pu se combiner ces dernières années pour donner naissance au principe d'approximation paramétrée. En particulier, nous étudierons les liens entre approximation et algorithmes de noyaux. / The theory of NP-completeness tells us that for many optimization problems there is no hope of finding an efficient algorithm computing an optimal solution. Based on this, two classical approaches have been developed to deal with these problems. The first one, called polynomial-time approximation, consists in designing efficient algorithms computing a solution that is close to an optimal one. The second one, called parameterized complexity, consists in designing exact algorithms whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two optimization problems in graphs: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how the methods from these two domains have been combined in recent years into a concept called parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
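For reference, Sparsest k-Subgraph has a compact statement, given below following its standard definition in the literature (the abstract does not define Sparsest k-Compaction, so it is not restated here).

```latex
% Sparsest k-Subgraph: given a graph G = (V, E) and an integer k, find a set
% S of exactly k vertices inducing as few edges as possible,
\[
  \min_{\substack{S \subseteq V \\ |S| = k}} \bigl|\, E(G[S]) \,\bigr| .
\]
% The problem is the minimization counterpart of Densest k-Subgraph and
% generalizes Maximum Independent Set (an independent set of size k is exactly
% a solution of value 0), which already makes it NP-hard.
```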
|
780 |
Approximation, complexité paramétrée et stratégies de résolution de problèmes d'affectation multidimensionnelle / Approximability, parameterized complexity and solving strategies of some multidimensional assignment problems. Duvillié, Guillerme, 07 October 2016
Au cours de la thèse, nous nous sommes intéressés aux problèmes d'empilement de wafers. Ces problèmes apparaissent lors de la fabrication de processeurs en 3 dimensions. Au cours du processus de fabrication, les puces électroniques doivent être empilées les unes sur les autres. Jusqu'à peu, ces dernières, une fois gravées sur des plaques de silicium appelées wafers, étaient découpées, puis triées afin d'écarter les puces défectueuses et enfin assemblées les unes entre elles. Cependant, empiler les wafers plutôt que les puces présente de nombreux avantages techniques et financiers. Naturellement, étant impossible d'écarter les puces défectueuses sans découper la plaque de silice, le problème de la superposition d'une puce viable avec une puce défectueuse se pose. Une pile de puces étant considérée comme défectueuse si elle contient ne serait-ce qu'une puce défectueuse, la superposition non réfléchie des wafers entre eux mènerait à un rendement désastreux. Afin de générer un nombre minimum de piles défectueuses, une "cartographie" de chaque wafer candidat à la superposition est réalisée lors d'une phase de test, permettant de situer les puces défectueuses sur le wafer. Une fois cette cartographie réalisée, l'objectif est de sélectionner les wafers qui seront assemblés ensemble de manière à faire correspondre les défauts de chacun des wafers. Ce problème peut être modélisé à l'aide d'un problème d'affectation multidimensionnelle. Chaque wafer est représenté par un vecteur comportant autant de composantes que de puces sur le wafer qu'il représente. Une composante égale à zéro matérialise une puce défectueuse tandis qu'un un matérialise une puce viable. Chaque lot de wafers est représenté par un lot de vecteurs. Formellement, une instance d'empilement de wafers est représentée par m ensembles de n vecteurs binaires p-dimensionnels. L'objectif est alors de réaliser n m-uplets disjoints contenant exactement un vecteur par ensemble. Ces m-uplets représenteront les piles. Chaque m-uplet peut être représenté par un vecteur binaire p-dimensionnel, chaque composante étant calculée en réalisant le ET binaire des composantes correspondantes des vecteurs qui composent le m-uplet. Autrement dit, une composante du vecteur représentant le m-uplet est égale à un si et seulement si tous les vecteurs ont cette composante égale à un. Et donc une pile de puces est viable si toutes les puces qui la composent sont viables. L'objectif est alors de minimiser le nombre de zéros ou de maximiser le nombre de un. La thèse comporte deux grandes parties. Une partie théorique aborde la complexité des différentes versions du problème en fonction de certains paramètres tels que m, n, p ou encore le nombre maximum de zéros par vecteur. Nous montrons entre autres que ces problèmes peuvent être utilisés pour modéliser des problèmes plus classiques tels que Maximum Clique, Minimum Vertex Cover ou encore k-Dimensional Matching, permettant de prouver un certain nombre de résultats négatifs, que ce soit du point de vue de la complexité classique, de l'approximabilité ou de la complexité paramétrée. Nous fournissons également des résultats positifs pour des cas particuliers du problème. Dans un second temps, nous nous intéressons à la résolution pratique du problème en fournissant et comparant un certain nombre de formulations en Programmation Linéaire en Nombres Entiers. Mais nous nous intéressons également aux performances en pratique de certaines heuristiques à garantie de performances détaillées dans la partie théorique.
/ In this thesis, we focus on Wafer-to-Wafer integration problems. These problems come from IC manufacturing. During the production of three-dimensional processors, dies have to be superimposed. Until recently, the dies were engraved on a silicon disk called a wafer, then cut, tested and sorted to discard the faulty ones, and finally stacked on one another. However, superimposing wafers instead of dies presents several technical and financial advantages. Since faulty dies can only be discarded when the wafer is cut, superimposing two wafers can lead to superimposing a faulty die onto a viable one. In this case, the resulting stack of dies is considered faulty. It follows that a bad assignment between the wafers can lead to a disastrous yield. In order to minimize the number of faulty die stacks, a "failure map" of each wafer is generated during a test phase. This map gives the location of the faulty dies on the wafer. The objective is then to take advantage of this map to define an assignment of the wafers to each other that matches as many failures as possible. This problem can be modeled as a multidimensional assignment problem. Each wafer can be seen as a vector with as many dimensions as the number of dies engraved on it. A coordinate set to zero marks a faulty die while a coordinate set to one indicates a viable one. Each lot of wafers is represented by a set of vectors. Formally, an instance of a Wafer-to-Wafer integration problem is represented by m sets of n p-dimensional vectors. The objective is then to partition the vectors into n disjoint m-tuples, each tuple containing exactly one vector per set. An m-tuple represents a stack of wafers. Every m-tuple can be represented by a p-dimensional vector, each coordinate of which is computed by performing the bitwise AND of the corresponding coordinates of the vectors that compose the m-tuple. In other words, a coordinate of the representative vector is equal to one if and only if this coordinate is equal to one in every vector composing the tuple. It follows that a die stack is viable if and only if all the dies composing the stack are viable. The objective is then to maximize the overall number of ones or to minimize the overall number of zeros. The first part of the thesis is theoretical. We study the complexity of the considered versions of the problem with regard to natural parameters such as m, n, p or the number of zeros per vector. We show that these problems can encode more classical problems such as Maximum Clique, Minimum Vertex Cover or k-Dimensional Matching. This leads to several negative results from the computational complexity, approximability and parameterized complexity points of view. We also provide several positive results for some specific cases of the problem. In the second part, we focus on the practical solving of the problem. We provide and compare several Integer Linear Programming formulations. We also study the practical performance of some approximation algorithms that are detailed in the theoretical part.
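To make the model concrete, the sketch below (illustrative code, not the thesis's implementation) handles the two-lot case m = 2, where the problem reduces to a classical linear assignment problem solvable exactly by the Hungarian method; the wafer data are randomly generated.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two lots of n wafers, each wafer a p-dimensional 0/1 vector
# (1 = viable die, 0 = faulty die); random data for illustration only
rng = np.random.default_rng(0)
n, p = 6, 20
lot_a = rng.integers(0, 2, size=(n, p))
lot_b = rng.integers(0, 2, size=(n, p))

# Stacking wafer i of lot A on wafer j of lot B: the stack is the bitwise AND
# of the two vectors, and its cost is its number of zeros (faulty die stacks)
stacks = lot_a[:, None, :] & lot_b[None, :, :]   # shape (n, n, p)
cost = p - stacks.sum(axis=2)                    # zeros per candidate pairing

# For m = 2 the wafer-to-wafer problem is a linear assignment problem and is
# solved optimally by the Hungarian method; it becomes hard in general for m >= 3
rows, cols = linear_sum_assignment(cost)
print("pairs:", list(zip(rows.tolist(), cols.tolist())))
print("total number of faulty die positions:", int(cost[rows, cols].sum()))
```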
|