181 |
TRANSLUCIDEZ DE RESINAS COMPOSTAS CONSTITUÍDAS POR DIFERENTES MONÔMEROS EM MEIOS E TEMPOS DE ARMAZENAMENTO DISTINTOS / TRANSLUCENCY OF COMPOSITE RESINS FORMED BY DIFFERENT MONOMERS IN DISTINCT MEDIA AND STORAGE TIMES
Kaizer, Marina da Rosa, 22 October 2010
The aim of this study was to determine the translucency of composite resins formed by different monomers when stored in two distinct environments (immersed in distilled
water and red wine) and for distinct periods of time (immediately after curing, 24 h and 30 days). Two methods of translucency evaluation (Translucency Parameter and
Percentage of Opacity) were also compared. Three composite resins were tested: one silorane-based (Filtek P90) and two methacrylate-based (Filtek Z350 and Rok). Fourteen specimens (sp) were made for each composite using a metallic matrix 10 mm in diameter and 2 mm thick. Immediately after light curing with an LED
(±800 mW/cm²) for 40 seconds, the sp were polished with Sof-Lex Pop On Orange Series discs and with felt and diamond paste, then washed in water and dried. Immediately after polishing, the initial translucency was measured with an SP60 X-Rite spectrophotometer using the CIE L*a*b* system. The sp of each composite resin were then divided into two groups (n = 7) according to the storage
medium. Successive readings were conducted at 24 hours and 30 days. Data were submitted to ANOVA and Tukey's test (5%) to compare group means. The silorane-based composite resin showed smaller translucency changes than the methacrylate-based resins, but only when stored in wine. In deionized water, the change in P90 was greater than in the other resins, its translucency increasing. All
composites presented more alteration when stored in wine than in water, and the variation was greater over longer storage periods. The two evaluation methods showed no differences, yielding similar results. The translucency of the composites was more affected by red wine
than by water, and the longer the contact with that substance, the greater the change. The type of monomer appears to be a factor influencing the
translucency changes of composite resins. The variation in translucency could be measured by both tested methods.
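As an aside on the first evaluation method above: the Translucency Parameter is conventionally the CIELAB color difference between readings of the same specimen taken over a black and a white backing. A minimal sketch of that computation (the function name and sample readings are illustrative, not values from this study):

```python
import math

def translucency_parameter(lab_black, lab_white):
    """CIELAB color difference (Delta E*ab) between readings of the same
    specimen over a black and a white backing:
    TP = sqrt((Lb-Lw)^2 + (ab-aw)^2 + (bb-bw)^2)."""
    return math.sqrt(sum((b - w) ** 2 for b, w in zip(lab_black, lab_white)))

# Hypothetical L*, a*, b* readings for one specimen.
tp = translucency_parameter((70.0, 1.5, 10.0), (78.0, 1.0, 14.0))
```

A higher TP means the backing shows through more, i.e. the specimen is more translucent; tracking TP across the storage periods gives exactly the kind of change the study reports.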
|
182 |
Problems of optimal choice on posets and generalizations of acyclic colourings
Garrod, Bryn James, January 2011
This dissertation is in two parts, each of three chapters. In Part 1, I shall prove some results concerning variants of the 'secretary problem'. In Part 2, I shall bound several generalizations of the acyclic chromatic number of a graph as functions of its maximum degree. I shall begin Chapter 1 by describing the classical secretary problem, in which the aim is to select the best candidate for the post of a secretary, and its solution. I shall then summarize some of its many generalizations that have been studied up to now, provide some basic theory, and briefly outline the results that I shall prove. In Chapter 2, I shall suppose that the candidates come as m pairs of equally qualified identical twins. I shall describe an optimal strategy, a formula for its probability of success, and the asymptotic behaviour of this strategy and its probability of success as m → ∞. I shall also find an optimal strategy and its probability of success for the analogous version with c-tuplets. I shall move away from known posets in Chapter 3, assuming instead that the candidates come from a poset about which the only information known is its size and number of maximal elements. I shall show that, given this information, there is an algorithm that is successful with probability at least 1/e. For posets with k ≥ 2 maximal elements, I shall prove that if their width is also k then this can be improved to (1/k)^{1/(k-1)}, and show that no better bound of this type is possible. In Chapter 4, I shall describe the history of acyclic colourings, in which a graph must be properly coloured with no two-coloured cycle, and state some known results about them and their variants. In particular, I shall highlight a result of Alon, McDiarmid and Reed, which bounds the acyclic chromatic number of a graph by a function of its maximum degree. My results in the next two chapters are of this form. I shall consider two natural generalizations in Chapter 5.
In the first, only cycles of length at least l must receive at least three colours. In the second, every cycle must receive at least c colours, except those of length less than c, which must be multicoloured. My results in Chapter 6 generalize the concept of a cycle; it is now subgraphs with minimum degree r that must receive at least three colours, rather than subgraphs with minimum degree two (which contain cycles). I shall also consider a natural version of this problem for hypergraphs.
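The classical problem's well-known solution — observe roughly n/e candidates without committing, then take the first one better than all of them — succeeds with probability about 1/e, and this is easy to check by simulation. A small sketch, assuming uniformly random arrival orders (illustrative, not code from the dissertation):

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the classical 1/e stopping rule on a random
    permutation of ranks 0..n-1 (higher rank = better candidate)."""
    ranks = list(range(n))
    rng.shuffle(ranks)
    k = int(n / math.e)               # observe the first ~n/e candidates
    threshold = max(ranks[:k], default=-1)
    for r in ranks[k:]:
        if r > threshold:             # first candidate beating the sample
            return r == n - 1         # success iff it is the best overall
    return ranks[-1] == n - 1         # otherwise forced to take the last one

rng = random.Random(0)
wins = sum(secretary_trial(100, rng) for _ in range(20000))
success_rate = wins / 20000           # empirically close to 1/e ≈ 0.368
```

The same simulation skeleton adapts naturally to the twin and c-tuplet variants by changing the rank structure of the candidate sequence.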
|
183 |
Soluções exatas para o Problema Cromático da Galeria de Arte / Exact solutions for the Chromatic Art Gallery Problem
Zambon, Mauricio Jose de Oliveira, 1990-, 26 August 2018
Orientadores (Advisors): Pedro Jussieu de Rezende, Cid Carvalho de Souza / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2014 / Abstract: In this dissertation, we present the first algorithmic approach and the first experimental results in the literature for solving the Discrete Chromatic Art Gallery Problem (DCAGP). This problem is geometric in nature and is a variant of the classic Art Gallery Problem. In the latter, we want to find a minimum-cardinality guard set able to watch over a given gallery. In the DCAGP, by contrast, the objective is to find a set of watchers that covers the gallery and admits a valid coloring with a minimum number of colors. A coloring is valid if any two watchers that observe a same point are assigned different colors. We attack this problem with two approaches: one exact and one heuristic. First, we present a primal heuristic able to provide good-quality upper bounds, and then an integer programming model that yields exact solutions for the DCAGP. This method was able to solve all instances from an extensive set of galleries, represented by randomly generated simple polygons of up to 2500 vertices, in less than one minute. On another set of instances, whose representation includes polygons with holes and fractal von Koch polygons with up to 800 vertices, the method found proven optimal solutions for 80% of the instances in less than 30 minutes. In the context of these solutions, we discuss the use of lazy constraints and techniques for strengthening the model, along with a brief analysis of the hardness of the instances. Moreover, we report on results obtained through a Lagrangian relaxation, mainly as a means to obtain good upper bounds, as well as from a variation of the relax-and-fix technique.
Lastly, we discuss a branch-and-price process for solving the DCAGP exactly / Master's degree / Computer Science
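The validity condition above — watchers that see a common point must receive distinct colors — turns the coloring step into proper coloring of a conflict graph over the watchers. A toy illustration with hypothetical visibility sets, using a greedy coloring as a cheap upper bound rather than the exact integer programming model of the dissertation:

```python
from itertools import combinations

# Hypothetical watchers, each with the set of gallery points it sees.
visibility = {
    "w1": {1, 2, 3},
    "w2": {3, 4},
    "w3": {4, 5},
    "w4": {1, 5},
}

# Two watchers conflict if they observe a common point.
conflicts = {w: set() for w in visibility}
for u, v in combinations(visibility, 2):
    if visibility[u] & visibility[v]:
        conflicts[u].add(v)
        conflicts[v].add(u)

# Greedy proper coloring of the conflict graph (an upper bound only).
color = {}
for w in sorted(visibility):
    used = {color[n] for n in conflicts[w] if n in color}
    color[w] = next(c for c in range(len(visibility)) if c not in used)
```

Here the four watchers form a conflict cycle and the greedy pass lands on two colors; the dissertation's exact methods instead certify the true minimum over all covering watcher sets.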
|
184 |
Zeolita (clinoptilolita) em biscoitos para cães: qualidade do produto e palatabilidade / Zeolite (clinoptilolite) in dog biscuits: product quality and palatability
Lucas Domênico Elmôr, 09 September 2013
Increasing levels of zeolite (clinoptilolite) - 0%, 1.5%, 3.0%, 4.5% - were used in order to evaluate the quality and palatability of dog biscuits.
Concerning product quality, water activity was evaluated by measuring the equilibrium relative humidity; color with a colorimeter in the CIE L*a*b* system; texture with a texturometer using a three-point bending rig probe; and preference ordering by dog owners. The palatability test was conducted with 14 adult mixed-breed dogs, male and female, with an average age of six years and an average weight of 14 kg, in a completely randomized design. At inclusion levels of 3.0% and 4.5%, zeolite decreased (p < 0.01) water activity, and at all levels tested it had no effect (p < 0.01) on color. Shear stress increased (p < 0.01) across the 0%, 1.5% and 3.0% treatments, respectively, but declined (p < 0.01) at the 4.5% inclusion level. In the sensory analysis, dog owners preferred (p < 0.05) the 0% inclusion level for color, the 0% and 1.5% levels for odor, and the 0% and 4.5% levels for hardness. In the palatability test, both first choice and intake ratio differed significantly (p < 0.01) among the levels tested: the 3% zeolite level was preferred, followed by the 4.5%, 0% and 1.5% levels, respectively. Even without affecting color, the addition of zeolite (clinoptilolite) alters the quality of dog biscuits, decreasing water activity at the 3.0% and 4.5% inclusion levels and changing texture at the 1.5% and 3.0% levels. Different levels of zeolite in the composition of snacks can be identified by dogs.
|
185 |
Quandle coloring conditions and zeros of the Alexander polynomials of Montesinos links / カンドル彩色条件とモンテシノス絡み目のアレキサンダー多項式の零点
Ishikawa, Katsumi, 25 March 2019
Kyoto University / 0048 / New-system doctoral course / Doctor of Science / Kou No. 21536 / Ri-Haku No. 4443 / 新制||理||1639 (University Library) / Graduate School of Science, Kyoto University, Division of Mathematics and Mathematical Sciences / (Chief examiner) Professor 大槻 知忠, Professor 向井 茂, Professor 小野 薫 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
|
186 |
Spectral Approach to Modern Algorithm Design
Akash Kumar (8802788), 06 May 2020
Spectral methods have had a huge influence on modern algorithm design. For algorithmic problems on graphs, this is done by using a deep connection between random walks and the powers of various natural matrices associated with the graph. The major contribution of this thesis initiates attempts to recover algorithmic results in graph minor theory via spectral methods.

We make progress towards this goal by exploring these questions in the property testing model for bounded-degree graphs. Our main contributions are:

1. The first result gives an almost query-optimal one-sided tester for the property of H-minor-freeness. Benjamini, Schramm and Shapira (STOC 2008) conjectured that for fixed H, this can be done in time O(sqrt(n)). Our algorithm solves this in time n^{1/2+o(1)}, which nearly resolves the conjecture up to n^{o(1)} factors.

2. BSS also conjectured that in the two-sided model, H-minor-freeness can be tested in time poly(1/eps). We resolve this conjecture in the affirmative.

3. Lastly, in a previous work on the two-sided question above, Hassidim, Kelner, Nguyen and Onak (FOCS 2009) introduced a tool they call a partition oracle. They conjectured that partition oracles could be implemented in time poly(1/eps) and gave an implementation taking exp(poly(1/eps)) time. In this work, we resolve this conjecture and produce such an oracle.

Additionally, this work presents an algorithm which can recover a planted 3-coloring in a graph with certain random-like properties, and suggests some future research directions.
|
187 |
Reconfiguration and combinatorial games / Reconfiguration et jeux combinatoires
Heinrich, Marc, 09 July 2019
This thesis explores problems related to games. The games that we consider in this study are games for which there is no hidden information: all the players have access to all the information related to the game; there is also no randomness, and everything is deterministic. A few well-known board games (such as chess or go) fall into this category and are representative of the kinds of games that we consider here. Central to the study of these games is the notion of strategy, which, roughly speaking, is a way of playing that ensures a certain objective. The main question of interest, both when playing a game and when studying it, is the problem of finding the 'best' strategy, which secures victory for the player following it. In this thesis, we consider both one-player games, also called combinatorial puzzles, and two-player games. Examples of combinatorial puzzles include Rubik's cube, Rush Hour, Sokoban, the 15 puzzle, and peg solitaire. Recently, some types of one-player games in particular have received a strong renewal of interest as part of the larger area of reconfiguration problems. The puzzles described above can all be framed in the following way: there is a set of configurations, which represents all the possible states of the game, and the player is allowed to transform a configuration using a prescribed set of moves. Starting from an initial configuration, the goal is to reach a target configuration by a succession of valid moves. Reconfiguration extends this definition to any search problem: the set of configurations becomes the set of solutions to an instance of a given problem, and we ask whether we can transform one given solution into another using only a prescribed set of moves.
Hence, in addition to the combinatorial puzzles, reconfiguration problems also include many 'games' which are no longer played by humans but are instead mathematical problems sharing many similarities with combinatorial puzzles. The study of reconfiguration problems has been driven by several motivations. It has algorithmic applications: it can be seen as a way to adapt a current solution already in place into a new one by making only small local changes. It is also connected to other problems such as random sampling, approximate counting, and problems coming from statistical physics. It can further be used as a tool for understanding the performance of heuristic algorithms based on local modifications of solutions, such as local search. Two-player games, which are also called combinatorial games, have been studied since the beginning of the twentieth century, starting with the works of Bouton, continued by Berlekamp, Conway, and Guy with the development of an elegant theory unifying a number of classical games. We focus in this study on perfect strategies (i.e., players always choosing the best possible move), and try to characterize who wins under perfect play for various games. This approach is at the heart of what is called Combinatorial Game Theory. Most of the research in this area is focused on 'math games', which are games invented by mathematicians, often with simple rules and almost never played by humans. The main motivation for studying these games comes from the nice, and sometimes unexpected, connections they have with other areas of mathematics, such as number theory, automata, or dynamical systems. In this thesis, we study one- and two-player games. The questions we consider are often related to complexity. Complexity theory tries to classify problems according to their hardness, that is, to quantify how much time it would take a computer to solve the problem.
Another aspect of this research consists in investigating structural properties that these games can satisfy. Finally, one of our main tools is the notion of graph, and we use in particular methods and techniques from graph theory to answer the questions just mentioned.
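On the two-player side, the Bouton result mentioned above gives a complete analysis of Nim: the player to move wins exactly when the bitwise XOR (nim-sum) of the pile sizes is nonzero. A short sketch of this standard textbook fact (not code from the thesis):

```python
from functools import reduce
from operator import xor

def first_player_wins(piles):
    """Bouton's theorem: the player to move wins iff the nim-sum
    (XOR of all pile sizes) is nonzero."""
    return reduce(xor, piles, 0) != 0

def winning_move(piles):
    """Return a winning move (pile index, new pile size), or None if the
    position is already lost for the player to move."""
    s = reduce(xor, piles, 0)
    for i, p in enumerate(piles):
        if p ^ s < p:               # reducing this pile to p XOR s
            return i, p ^ s         # leaves the opponent a nim-sum of 0
    return None
```

For example, piles (1, 2, 3) have nim-sum 0 and are a loss for the player to move, while from (3, 4, 5) the first player can move to (1, 4, 5) and win.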
|
188 |
Neighborhood-Restricted [≤2]-Achromatic Colorings
Chandler, James D., Desormeaux, Wyatt J., Haynes, Teresa W., Hedetniemi, Stephen T., 10 July 2016
A (closed) neighborhood-restricted [≤2]-coloring of a graph G is an assignment of colors to the vertices of G such that no more than two colors are assigned in any closed neighborhood, that is, for every vertex v in G, the vertex v and its neighbors are in at most two different color classes. The [≤2]-achromatic number is defined as the maximum number of colors in any [≤2]-coloring of G. We study the [≤2]-achromatic number. In particular, we improve a known upper bound and characterize the extremal graphs for some other known bounds.
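For very small graphs, the [≤2]-achromatic number can be computed straight from the definition by enumerating all colorings. A brute-force sketch (illustrative only; the paper's bounds are proved combinatorially):

```python
from itertools import product

def is_closed_nbhd_le2(adj, coloring):
    """Check that every closed neighborhood meets at most 2 color classes."""
    for v, nbrs in adj.items():
        if len({coloring[v]} | {coloring[u] for u in nbrs}) > 2:
            return False
    return True

def le2_achromatic_number(adj):
    """Maximum number of colors over all valid [<=2]-colorings,
    by exhaustive search (exponential; small graphs only)."""
    verts = sorted(adj)
    best = 1
    for colors in product(range(len(verts)), repeat=len(verts)):
        c = dict(zip(verts, colors))
        if is_closed_nbhd_le2(adj, c):
            best = max(best, len(set(colors)))
    return best

# A path on 4 vertices: 0-1-2-3.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On the path above the coloring (a, b, b, c) is valid and uses three colors, whereas four colors would put three classes into the closed neighborhood of an inner vertex; on a triangle every closed neighborhood is the whole graph, so two colors is the maximum.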
|
189 |
Verification formelle et optimisation de l'allocation de registres / Formal Verification and Optimization of Register Allocation
Robillard, Benoît, 30 November 2010
The need for trustworthy programs has led to an increasing use of formal verification techniques over the last decade, and especially of program proof. However, the code running on the computer is not the source code, i.e. the one written by the developer, since it has to be translated by the compiler. Formal verification of compilers is therefore an indispensable complement to source code verification. One of the hardest phases of compilation is register allocation, the phase in which the compiler decides where the variables of the program are stored in memory during its execution. There are two kinds of memory locations: a limited number of fast-access zones, called registers, and a stack, assumed large enough to hold all the variables of a program but much slower to access. The aim of register allocation is to make the best possible use of the registers, since a good register allocation can lead to a significant improvement in the program's running time. The best-known model for register allocation is based on coloring the interference-affinity graph. In this thesis, our objective is twofold: first, to formally verify some well-known graph-coloring register allocation algorithms and, second, to design new optimizing algorithms for this compilation phase. We first show that the Coq proof assistant is well suited to formalizing graph-coloring register allocation algorithms. We then carry out the formal verification in Coq of one of the most classical such algorithms, the Iterated Register Coalescing (IRC), and of a generalization of it that allows a user unfamiliar with Coq to easily implement their own variant of the algorithm, at the sole price of a possible loss of algorithmic efficiency. These formalizations require careful treatment of the formalization of interference-affinity graphs, of the purely functional translation of imperative algorithms, and of the efficiency, termination, and correctness of the functional version. Our formally verified implementation of IRC has been integrated into a prototype of the CompCert compiler. We then studied two intermediate representations of programs, including the SSA form, and exploited their properties to propose new approaches to optimally solving coalescing, one of the optimizations performed during register allocation with the strongest impact on the quality of compiled code. These approaches show that coalescing criteria taking global parameters of the interference-affinity graph into account, such as its treewidth, open the way to new, potentially more efficient resolution methods.
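As background for the graph-coloring model above: once the interference graph is built, even a naive greedy coloring yields a (suboptimal) allocation, spilling a variable to the stack when all registers are taken by its neighbors. A minimal sketch under a hypothetical interference graph; this is not the verified IRC, which additionally performs simplification, coalescing of affinities, and careful spill selection:

```python
def allocate(interference, num_registers):
    """Greedy graph-coloring register allocation: give each variable the
    lowest register not used by an already-colored interfering neighbor,
    spilling to the stack when none is free."""
    assignment = {}
    for var in sorted(interference):
        taken = {assignment[n] for n in interference[var]
                 if assignment.get(n, "spill") != "spill"}
        free = [r for r in range(num_registers) if r not in taken]
        assignment[var] = free[0] if free else "spill"
    return assignment

# Hypothetical interference graph: an edge means the two variables are
# live at the same time and so cannot share a register.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
```

With two registers the triangle a-b-c forces one spill; with three, every variable fits in a register. The quality gap between such greedy passes and an optimal coalescing-aware allocation is precisely what motivates the thesis's exact approaches.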
|
190 |
Color stability of light-activated bleach shade composites
Al-Yakoubi, Yaser, January 2010
Indiana University-Purdue University Indianapolis (IUPUI) / This study evaluated the color stability of bleach shade composites when activated by a high-intensity quartz tungsten-halogen (QTH) light source after 1 day, 7 days, and 30 days of exposure to different conditions. The color stability of bleach shade composites depends on various factors, namely, the resin material, the shade of the resin material, the storage method, and the storage time.
|