
Topics in convex optimization: interior-point methods, conic duality and approximations

Glineur, Francois 26 January 2001 (has links)
Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while some of its developments involve rather theoretical concepts, its most successful algorithms are heavily used by numerous companies to solve scheduling and design problems on a daily basis. Our research started with the study of the conic formulation of convex optimization problems. This approach was already studied in the seventies but has recently regained interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity: - a very rich duality theory (the existence of a dual problem strongly related to the primal problem, with a very symmetric formulation), - the ability to solve these problems efficiently, both from the theoretical (polynomial algorithmic complexity) and practical (implementations capable of solving large-scale problems) points of view. Most of the research in this area has involved so-called self-dual cones, for which the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category. 
We made several contributions to this field: - a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles behind the design of these algorithms, - a computational study of a method of linear approximation for convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori), - an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns using ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades). However, our research focused on a much less studied category of convex problems which does not rely on self-dual cones, i.e. structured problems whose dual is formulated very differently from the primal. We studied in particular: - geometric optimization, developed in the late sixties, which possesses numerous applications in engineering (entropy optimization, used in information theory, also belongs to this class of problems), - l_p-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1). For each of these classes of problems, we introduced a new type of convex cone that makes their formulation as standard conic problems possible. This allowed us to derive greatly simplified proofs of the classical duality results pertaining to these problems, notably weak duality (a mere consequence of convexity) and the absence of a duality gap (a strong duality property holding without any constraint qualification, which does not hold in the general convex case). 
We also uncovered a rather surprising result: geometric optimization can be viewed as a limit case of l_p-norm optimization. Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations. We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovsky asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of l_p-norm optimization (its parameter, the main factor in the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p) as well as for the general framework mentioned above. Finally, we contributed a survey of the self-concordancy property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into why the most commonly adopted definition of self-concordant functions is the best possible.
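The interior-point principle surveyed in this thesis can be illustrated with a textbook log-barrier method for linear optimization. The sketch below (a generic didactic implementation, not one of the thesis's algorithms) follows the central path by minimizing t*c@x - sum(log(b - A@x)) with Newton steps for increasing t; the problem data at the bottom is an illustrative toy LP:

```python
import numpy as np

def lp_barrier(c, A, b, x0, t0=1.0, mu=10.0, eps=1e-6, newton_iters=50):
    """Minimize c@x subject to A@x <= b with a log-barrier interior-point
    method (textbook sketch; x0 must be strictly feasible)."""
    x, t, m = x0.astype(float), t0, len(b)
    while m / t > eps:                          # duality-gap bound: m/t
        for _ in range(newton_iters):           # centering step
            s = b - A @ x                       # slacks, must stay > 0
            grad = t * c + A.T @ (1.0 / s)
            H = A.T @ np.diag(1.0 / s**2) @ A
            dx = np.linalg.solve(H, -grad)
            step = 1.0                          # backtrack to stay strictly feasible
            while np.min(b - A @ (x + step * dx)) <= 0:
                step *= 0.5
            x = x + step * dx
            if np.sqrt(dx @ H @ dx) < 1e-8:     # Newton decrement small: centered
                break
        t *= mu                                 # tighten the barrier
    return x

# toy LP: maximize x + y on the unit box, i.e. minimize -x - y
c = np.array([-1.0, -1.0])
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])
x = lp_barrier(c, A, b, np.array([0.5, 0.5]))   # converges to (1, 1)
```

The duality gap bound m/t after each centering step is exactly the kind of guarantee that the complexity parameter of a self-concordant barrier controls in the general theory.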

Electrode degradation in proton exchange membrane fuel cells

Oyarce, Alejandro January 2013 (has links)
The topic of this thesis is the degradation of fuel cell electrodes in proton exchange membrane fuel cells (PEMFCs), in particular the degradation associated with localized fuel starvation, which is often encountered during start-ups and shut-downs (SUs/SDs) of PEMFCs. At SU/SD, O2 and H2 usually coexist in the anode compartment. This situation forces the opposite electrode, i.e. the cathode, to very high potentials, resulting in corrosion of the carbon supporting the catalyst, referred to as carbon corrosion. The aim of this thesis has been to develop methods, materials and strategies to address the issues associated with carbon corrosion in PEMFCs. The extent of catalyst degradation is commonly evaluated by determining the electrochemically active surface area (ECSA) of the fuel cell electrode. It was therefore considered important to study the effect of RH, temperature and type of accelerated degradation test (ADT) on the ECSA. Low RH decreases the ECSA of the electrode, attributed to restructuring of the ionomer and loss of contact with the catalyst. In the search for more durable supports, we evaluated different ADTs for carbon corrosion. Potentiostatic holds at 1.2 V vs. RHE were found to be too mild. Potentiostatic holds at 1.4 V vs. RHE were found to induce a large degree of reversibility, also attributed to ionomer restructuring. Triangle-wave potential cycling was found to irreversibly degrade the electrode within a reasonable amount of time, closely simulating SU/SD conditions. Corrosion of carbon-based supports not only degrades the catalyst by lowering the ECSA, but also has a profound effect on the electrode morphology. Decreased electrode porosity, increased agglomerate size and ionomer enrichment all contribute to the degradation of the mass-transport properties of the cathode. 
Graphitized carbon fibers were found to be five times more corrosion resistant than conventional carbons, primarily attributed to their lower surface area. Furthermore, fibers were found to better maintain the integrity of the electrode morphology, generally showing less degradation of the mass-transport properties. Different system strategies for shut-down were evaluated. Doing nothing to the fuel cell during shut-downs is detrimental. O2 consumption with a load and an H2 purge of the cathode were found to give around 100 times lower degradation rates than doing nothing, and almost 10 times lower rates than a simple air purge of the anode. Finally, in-situ measurements of contact resistance showed that the contact resistance between the GDL and the BPP is highly dynamic and changes with operating conditions.
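The ECSA determination mentioned above is commonly done by integrating the hydrogen-adsorption charge from a cyclic voltammogram and normalizing by the monolayer charge for platinum. A minimal sketch of that arithmetic follows; the charge and loading values are made-up illustrative numbers (only the 210 µC/cm² monolayer constant is a standard literature value), not data from the thesis:

```python
# ECSA from a hydrogen-adsorption charge (illustrative numbers).
Q_H = 0.042            # C, integrated H-adsorption charge (hypothetical)
LOADING = 0.4e-3       # g of Pt on the electrode (hypothetical: 0.4 mg)
Q_MONOLAYER = 210e-6   # C per cm^2 of Pt surface (standard value for polycrystalline Pt)

ecsa_cm2 = Q_H / Q_MONOLAYER                 # Pt surface area in cm^2
ecsa_m2_per_g = ecsa_cm2 * 1e-4 / LOADING    # normalized: ~50 m^2 per gram of Pt
```

A drop in this number across an ADT quantifies the catalyst degradation discussed above.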

Morphology and thermal properties of HDPE/EVA composites with POSS

Scapini, Patrícia 24 September 2008 (has links)
In this study, composites with a polymeric matrix comprising a blend of high-density polyethylene (HDPE) and ethylene-vinyl acetate copolymer (EVA), with polyhedral oligomeric silsesquioxane (POSS) as the nanofiller, were processed and characterized. The composites were processed in a closed mixing chamber and characterized in terms of their thermal and morphological properties. For the preparation of the composites, the concentrations of the blend components (0, 25, 50, 75 and 100%) and of the nanofiller (0, 0.5, 1, 1.5, 2, 5 and 10%) were varied. The processing results showed that an increase in the POSS concentration in the polymeric matrix caused aggregation of the POSS. Differential scanning calorimetry and thermogravimetry indicated that the POSS did not affect the melting, crystallization and degradation temperatures of the polymeric matrix. The X-ray results indicated that the presence of EVA in the composite led to the appearance of crystalline domains at lower POSS concentrations. Scanning electron microscopy indicated that the samples with 1% POSS have a homogeneous distribution in the polymeric matrix, whereas aggregates formed in the samples with 5% POSS. The Tg values obtained from dynamic mechanical thermal analysis indicated that the POSS had a plasticizing effect on the HDPE phase and reduced the mobility of the EVA phase. The storage modulus increased with the incorporation of POSS into the polymeric matrix.

The repetition-free longest common subsequence problem

Christian Tjandraatmadja 26 July 2010 (has links)
We explore the following problem: given two sequences X and Y over a finite alphabet, find a longest common subsequence of X and Y without repeated symbols. We study the structure of this problem, particularly from the point of view of graphs and polyhedral combinatorics, and develop approximation algorithms and heuristics for it. The focus of this work is on the construction of an algorithm based on the branch-and-cut technique, taking advantage of an efficient separation algorithm and of heuristics and techniques to find an optimal solution earlier. We also study the easier problem on which this one is based: given two sequences X and Y over a finite alphabet, find a longest common subsequence of X and Y. We explore this problem from the point of view of polyhedral combinatorics and describe several known algorithms for solving it.
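The easier base problem mentioned above, the classic longest common subsequence, has a well-known quadratic dynamic program; a minimal sketch (the harder repetition-free variant studied in the thesis is not solved by this DP):

```python
def lcs_length(x, y):
    """Classic O(|x|*|y|) dynamic program for the length of a longest
    common subsequence of x and y (symbols may repeat)."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # dp[i][j] = LCS of x[:i], y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 ("BCBA" is one such subsequence)
```

Forbidding repeated symbols destroys the optimal-substructure property this DP relies on, which is why the thesis turns to branch-and-cut instead.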

Effective Automatic Computation Placement and Data Allocation for Parallelization of Regular Programs

Chandan, G January 2014 (has links) (PDF)
Scientific applications that operate on large data sets require huge amounts of computational power and memory. These applications are typically run on High Performance Computing (HPC) systems consisting of multiple compute nodes connected over a network interconnect such as InfiniBand. Each compute node has its own memory and does not share an address space with other nodes. A significant amount of work has been done in the past two decades on parallelizing for distributed-memory architectures. A majority of this work went into developing compiler technologies such as High Performance Fortran (HPF) and partitioned global address space (PGAS) languages. However, several steps involved in achieving good performance remained manual, so the approach currently used to obtain the best performance is to rely on highly tuned libraries such as ScaLAPACK. The objective of this work is to improve automatic compiler and runtime support for distributed-memory clusters for regular programs. Regular programs typically use arrays as their main data structure, and array accesses are affine functions of outer loop indices and program parameters. Many scientific applications, such as linear-algebra kernels, stencils, partial differential equation solvers, data-mining applications and dynamic programming codes, fall into this category. In this work, we propose techniques for finding computation mappings and data allocations when compiling regular programs for distributed-memory clusters. Techniques for transformation and detection of parallelism relying on the polyhedral framework already exist. We propose automatic techniques to determine computation placements for the identified parallelism and the allocation of data. We model the problem of finding a good computation placement as a graph partitioning problem with constraints to minimize both communication volume and load imbalance for the entire program. 
We show that our approach to computation mapping is more effective than those that can be developed using vendor-supplied libraries. Our approach to data allocation is driven by tiling of data spaces, along with a compiler-assisted runtime scheme to allocate and deallocate tiles on demand and reuse them. Experimental results on some sequences of BLAS calls demonstrate a mean speedup of 1.82× over versions written with ScaLAPACK. Besides enabling weak scaling for distributed memory, data tiling also improves locality for shared-memory parallelization. Experimental results on a 32-core shared-memory SMP system show a mean speedup of 2.67× over code that is not data tiled.
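The simplest computation-placement scheme against which graph-partitioning approaches are usually compared is a contiguous block distribution of the parallel iteration domain across ranks. A small sketch of that baseline (didactic only; the thesis formulates placement as a constrained graph-partitioning problem, which this does not implement):

```python
def block_partition(n, nranks):
    """Split the iteration domain [0, n) into contiguous, nearly equal
    blocks, one per rank (owner-computes baseline placement)."""
    base, rem = divmod(n, nranks)
    parts, start = [], 0
    for r in range(nranks):
        size = base + (1 if r < rem else 0)   # spread the remainder
        parts.append(range(start, start + size))
        start += size
    return parts

parts = block_partition(10, 4)
print([list(p) for p in parts])  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Such a placement balances load for uniform iterations but ignores communication volume, which is exactly the term the graph-partitioning objective adds.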

A framework for efficient execution on GPU and CPU+GPU systems

Dollinger, Jean-François 01 July 2015 (has links)
Technological limitations faced by semiconductor manufacturers in the early 2000s halted the steep growth in performance of sequential computation units. The current trend is to increase the number of processor cores per socket and to progressively use GPU cards for highly parallel computations. The complexity of recent architectures makes it difficult to statically predict the performance of a program. We describe a reliable and accurate method for predicting the execution time of parallel loop nests on GPUs, based on three stages: static code generation, offline profiling and online prediction. In addition, we present two techniques to fully exploit the computing resources available on a system. The first technique consists in jointly using the CPU and GPU to execute a code. To achieve good performance, it is necessary to consider load balancing, in particular by predicting execution times. The runtime uses the profiling results, and a scheduler computes execution times and adjusts the load distributed to the processors. The second technique puts the CPU and GPU in competition: instances of the considered code are executed simultaneously on the CPU and GPU. The winner of the competition notifies the other instance of its completion, causing the latter to terminate.
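The competition scheme described above can be sketched with plain threads and a shared completion flag; this is an illustrative model of the idea (the actual framework races CPU and GPU code versions, not two Python functions):

```python
import threading

def race(workers):
    """Run several implementations of the same computation concurrently and
    keep the first result; losers observe the event and abort early."""
    done = threading.Event()
    result = {}

    def run(name, fn):
        value = fn(done)                   # fn polls `done` to cancel itself
        if value is not None and not done.is_set():
            result["winner"], result["value"] = name, value
            done.set()                     # notify the other instances

    threads = [threading.Thread(target=run, args=(n, f)) for n, f in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

def fast(done):
    return sum(range(1000))                # finishes quickly

def slow(done):
    for _ in range(100):                   # simulated long computation
        if done.is_set():                  # cancelled by the winner
            return None
    return sum(range(1000))

out = race([("fast", fast), ("slow", slow)])
```

Whichever instance wins, the computed value is the same; the point of the race is that the faster device for the given input is selected automatically, at the cost of redundant work.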

XFOR (Multifor): A new programming structure to ease the formulation of efficient loop optimizations

Fassi, Imen 27 November 2015 (has links)
We propose a new programming structure named XFOR (Multifor), dedicated to data-reuse-aware programming. It allows several for-loops to be handled simultaneously and their respective iteration domains to be mapped onto each other, and it eases the application and composition of loop transformations. Experiments show that XFOR codes provide significant speedups over the original code versions, but also over versions generated automatically by the polyhedral loop optimizer Pluto. We implemented the XFOR structure through the development of three software tools: (1) a source-to-source compiler named IBB (for Iterate-But-Better!), which automatically translates any C/C++ code containing XFOR-loops into an equivalent code where the XFOR-loops have been replaced by semantically equivalent for-loops; IBB also benefits from the optimizations implemented in the polyhedral code generator CLooG, which it invokes to generate for-loops from an OpenScop specification; (2) an XFOR programming environment named XFOR-WIZARD, which assists the programmer in rewriting a program with classical for-loops into an equivalent but more efficient program using XFOR-loops; (3) a tool named XFORGEN, which automatically generates XFOR-loops from any OpenScop representation of transformed loop nests produced by an automatic optimizer.
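The data-reuse idea behind XFOR can be illustrated by fusing two dependent loops so a produced value is consumed while still in cache. This Python sketch only illustrates the transformation (XFOR itself is a C-family construct with its own syntax, which this does not reproduce):

```python
def separate(a):
    """Two dependent loops written separately: blur, then threshold."""
    n = len(a)
    blur = [0.0] * n
    for i in range(1, n - 1):                      # loop 1: 3-point average
        blur[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    out = [0.0] * n
    for i in range(1, n - 1):                      # loop 2: consumes loop 1's output
        out[i] = 1.0 if blur[i] > 0.5 else 0.0
    return out

def fused(a):
    """Same computation with the iteration domains overlapped: each blurred
    value is consumed immediately, while a[i-1..i+1] is still hot in cache."""
    n = len(a)
    out = [0.0] * n
    for i in range(1, n - 1):
        b = (a[i - 1] + a[i] + a[i + 1]) / 3.0
        out[i] = 1.0 if b > 0.5 else 0.0
    return out

a = [0.0, 0.2, 0.9, 0.8, 0.9, 0.1, 0.0]
```

XFOR lets the programmer express this kind of overlap (including non-zero offsets between the loop domains) declaratively instead of rewriting the loop bodies by hand.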

Graph colorings and digraph subdivisions

Phablo Fernando Soares Moura 30 March 2017 (has links)
The vertex coloring problem is a classic problem in graph theory that asks for a partition of the vertex set into a minimum number of stable sets. This thesis presents our studies of three vertex (re)coloring problems on graphs and of a problem related to a long-standing conjecture on subdivisions of digraphs. Firstly, we address the convex recoloring problem, in which an arbitrarily colored graph G is given and one wishes to find a minimum-weight recoloring such that each color class induces a connected subgraph of G. We show inapproximability results, introduce an integer linear programming (ILP) formulation that models the problem, and present computational experiments using a column generation approach. The k-fold coloring problem is a generalization of the classic vertex coloring problem and consists in covering the vertex set of a graph with a minimum number of stable sets in such a way that every vertex is covered by at least k (possibly identical) stable sets. We present an ILP formulation for this problem and give a detailed polyhedral study of the polytope associated with it. The last coloring problem studied in this thesis is the proper orientation problem. It consists in orienting the edge set of a given graph so that adjacent vertices have different in-degrees and the maximum in-degree is minimized. Clearly, the in-degrees induce a partition of the vertex set into stable sets, that is, a coloring (in the conventional sense) of the vertices. Our contributions to this problem concern hardness results and upper bounds for bipartite graphs. Finally, we study a problem related to a conjecture of Mader from the eighties on subdivisions of digraphs. This conjecture states that, for every acyclic digraph H, there exists an integer f(H) such that every digraph with minimum out-degree at least f(H) contains a subdivision of H as a subdigraph. We provide evidence for this conjecture by proving that it holds for some particular classes of acyclic digraphs. 
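For concreteness, the set-covering style ILP behind the k-fold coloring problem mentioned above can be sketched as follows. This is the standard formulation with one variable per stable set; it may differ in details from the formulation actually studied in the thesis:

```latex
\begin{align*}
\min\ & \sum_{S \in \mathcal{S}} x_S \\
\text{s.t.}\ & \sum_{S \in \mathcal{S}\,:\, v \in S} x_S \;\ge\; k
  && \forall\, v \in V, \\
& x_S \in \mathbb{Z}_{\ge 0} && \forall\, S \in \mathcal{S},
\end{align*}
```

where $\mathcal{S}$ denotes the family of stable sets of the graph $G = (V, E)$. Since $|\mathcal{S}|$ is in general exponential in $|V|$, formulations of this kind are typically tackled by column generation, generating improving stable sets on demand.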

[pt] OTIMIZAÇÃO TOPOLÓGICA USANDO MALHAS POLIÉDRICAS / [en] TOPOLOGY OPTIMIZATION USING POLYHEDRAL MESHES

22 February 2019
[en] Topology optimization has had an impact in various fields and has the potential to revolutionize several areas of engineering. This method can be implemented based on the finite element method, and there are several approaches to choose from. When using an element-based approach, every finite element is a potential void or actual material, and every element in the domain is assigned a constant design variable, namely, a density. In an Eulerian setting, the obtained topology consists of a subset of the initial elements. This approach, however, is subject to numerical instabilities such as one-node connections and rapid oscillations between solid and void material (the so-called checkerboard pattern). Undesirable designs may be obtained when standard low-order elements are used and no further regularization and/or restriction methods are employed. Unstructured polyhedral meshes naturally address these issues and offer flexibility in discretizing non-Cartesian domains.
In this work we investigate topology optimization on polyhedral meshes through a mesh staggering approach. First, polyhedral meshes are generated based on the concept of centroidal Voronoi diagrams and further optimized for finite element computations. We show that the condition number of the associated system of equations can be improved by minimizing an energy function related to the element's geometry. Given the mesh quality and problem size, different types of solvers perform differently, and thus both direct and iterative solvers are addressed. Second, polyhedra are decomposed into tetrahedra by a tailored embedding algorithm. The polyhedral discretization carries the design variables, and a tetrahedral subdiscretization is nested within the polyhedra for finite element analysis. The modular framework decouples analysis and optimization routines and variables, which is promising for software enhancement and for achieving high-fidelity solutions. Fields such as displacements and design variables are linked through a mapping. The proposed mapping-based framework provides a general approach to solving three-dimensional topology optimization problems using polyhedra, and it has the potential to be explored in applications beyond the scope of the present work. Finally, the capabilities of the framework are evaluated through several examples, which demonstrate the features and potential of the proposed approach.
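The centroidal Voronoi diagram construction mentioned above is commonly computed by Lloyd-type fixed-point iterations: each seed is repeatedly moved to the centroid of its Voronoi cell. The following self-contained Python sketch approximates the centroids by Monte Carlo sampling over the unit square; all names and parameters are illustrative and do not reflect the thesis implementation:

```python
# Hedged sketch of Lloyd's algorithm for a centroidal Voronoi
# tessellation (CVT) in 2D, using a fixed Monte Carlo sample of
# the unit square to estimate cell centroids.
import random

def lloyd_step(seeds, samples):
    """Move each seed to the estimated centroid of its Voronoi
    cell, computed from a fixed set of sample points."""
    sums = [[0.0, 0.0, 0] for _ in seeds]
    for p in samples:
        # assign the sample to its nearest seed (its Voronoi cell)
        i = min(range(len(seeds)),
                key=lambda k: (p[0] - seeds[k][0]) ** 2
                            + (p[1] - seeds[k][1]) ** 2)
        sums[i][0] += p[0]; sums[i][1] += p[1]; sums[i][2] += 1
    # empty cells keep their old seed position
    return [(sx / n, sy / n) if n else s
            for (sx, sy, n), s in zip(sums, seeds)]

random.seed(0)  # deterministic for reproducibility
samples = [(random.random(), random.random()) for _ in range(4000)]
seeds = [(random.random(), random.random()) for _ in range(6)]
for _ in range(25):
    seeds = lloyd_step(seeds, samples)
# After iterating, each seed lies near the centroid of its cell,
# yielding well-shaped cells suitable as a starting point for
# polyhedral meshing.
```

In the 3D setting of the thesis, the same idea is applied to polyhedral cells, and the resulting mesh is then further optimized for finite element analysis.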
