  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Parallel Scheduling in the Cloud Systems : Approximate and Exact Methods / Ordonnancement parallèle des systèmes Cloud : méthodes approchées et exactes

Hassan Abdeljabbar Hassan, Mohammed Albarra 15 December 2016 (has links)
Cloud computing has emerged as a powerful concept for sharing costs and resources related to end-user demand, and several service models built on it are widely used (IaaS, PaaS, SaaS, ...). In this context, our research focuses on designing new methodologies and algorithms to optimize performance, using scheduling and combinatorial theory. We are interested in optimizing the performance of a cloud computing environment in which resources are heterogeneous (operators, machines, processors, ...) but limited. Several scheduling problems are addressed in this thesis; our objective is to build advanced algorithms that account for the specificities of such an environment while guaranteeing the quality of solutions.

Generally, scheduling consists of organizing activities in a specific system subject to rules that must be respected. Scheduling problems are essential in project management, but also in a wide range of real systems (telecommunications, computer science, transportation, production, ...). More generally, solving a scheduling problem amounts to organizing and synchronizing a set of activities (jobs or tasks) by exploiting the available capacities (resources). This execution must respect various technical rules (constraints) and provide maximum effectiveness (according to a set of criteria). Most of these problems belong to the class of NP-hard problems, for which most computer scientists do not expect a polynomial-time exact algorithm to exist unless P = NP. The study of these problems is thus scientifically interesting in addition to being of high practical relevance.

In particular, we aim to build new, efficient combinatorial methods for solving parallel-machine scheduling problems in which resources have different speeds and tasks are linked by precedence constraints. We study two methodological approaches to the problems under consideration: exact methods and meta-heuristics. Three scheduling problems are considered: task scheduling in a cloud environment, which can be generalized to scheduling on unrelated parallel machines, and the open shop scheduling problem with different constraints. For unrelated parallel machines with precedence constraints, we propose novel genetic task-scheduling algorithms that minimize the maximum completion time (makespan). These algorithms combine the genetic-algorithm approach with techniques and batching rules such as list scheduling (LS) and earliest completion time (ECT). We evaluate and compare the proposed algorithms against one of the well-known genetic algorithms in the literature, proposed for task scheduling on heterogeneous computing systems; the comparison is further extended to an existing greedy search method and to an exact formulation based on basic integer linear programming. The proposed genetic algorithms perform well, dominating the evaluated methods across problem sizes and time complexity on large benchmark sets of instances. We also extend three existing mathematical formulations to derive exact solutions for this problem; these formulations are validated and compared with each other through extensive computational experiments. Moreover, we propose an integer linear programming formulation for unrelated parallel machine scheduling with precedence/disjunctive constraints, based on interval and m-clique free graphs, with an exponential number of constraints.
We developed a Branch-and-Cut algorithm in which the separation problems are based on graph algorithms, and analyzed the results to determine the most useful inequalities for this problem. Finally, we addressed the generalized open shop scheduling problem together with the classical open shop problem; the generalized open shop is itself a special case of unrelated parallel machine scheduling with disjunctive constraints. We proposed a mathematical programming formulation for both problems, together with several families of valid inequalities that improve its performance, reused the constraints defined previously to improve performance on the generalized open shop problem, and tested the improved model on the classical open shop instances from the literature, obtaining good results and faster solution times on some instances.
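As a rough illustration of the heuristic ingredients named above, the following sketch combines list scheduling with an earliest-completion-time (ECT) rule on unrelated parallel machines with precedence constraints. It is our own minimal reconstruction, not the thesis code; the function name and data layout are assumptions.

```python
def ect_list_schedule(durations, preds):
    """Greedy ECT list scheduling on unrelated parallel machines.

    durations[j][m] = processing time of task j on machine m (machines are
    "unrelated": each task may run faster or slower on each machine).
    preds[j] = set of tasks that must finish before j starts.
    Returns (makespan, assignment) with assignment[j] = (machine, start, end).
    """
    n, m = len(durations), len(durations[0])
    machine_free = [0.0] * m   # time at which each machine becomes idle
    finish = {}                # completion time of already-scheduled tasks
    assignment = {}
    remaining = set(range(n))
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [j for j in remaining if preds.get(j, set()) <= finish.keys()]
        # among ready tasks, pick the (task, machine) pair finishing earliest
        best = None
        for j in ready:
            release = max((finish[p] for p in preds.get(j, set())), default=0.0)
            for k in range(m):
                start = max(release, machine_free[k])
                end = start + durations[j][k]
                if best is None or end < best[0]:
                    best = (end, j, k, start)
        end, j, k, start = best
        machine_free[k] = end
        finish[j] = end
        assignment[j] = (k, start, end)
        remaining.discard(j)
    return max(machine_free), assignment
```

On a toy instance with three tasks, two machines, and task 2 depending on tasks 0 and 1, the rule places the independent tasks on their fastest machines in parallel and appends the dependent task where it completes earliest.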
82

Extending Polyhedral Techniques towards Parallel Specifications and Approximations / Extension des Techniques Polyedriques vers les Specifications Parallelles et les Approximations

Isoard, Alexandre 05 July 2016 (has links)
Polyhedral techniques enable the application of analyses and code transformations to multi-dimensional structures such as nested loops and arrays. They are usually restricted to sequential programs whose control is both affine and static. This thesis extends them to programs involving, for example, non-analyzable conditions or expressing parallelism. The first result is the extension of the analysis of live ranges and memory conflicts, for scalars and arrays, to programs with parallel or approximated specifications. In previous work on memory allocation, for which this analysis is required, the concept of time provides a total order over the instructions, and the existence of this order is an implicit requirement. We show that it is possible to carry out such analyses on any partial order that matches the parallelism of the studied program. The second result extends memory folding techniques, based on Euclidean lattices, to automatically find an appropriate basis from the set of memory conflicts. This set is often non-convex, a case that was inadequately handled by previous methods. The last result applies both previous analyses to "pipelined" blocking methods, especially in the case of parametric block sizes. This situation gives rise to non-affine control but can be handled accurately by choosing suitable approximations. This paves the way for efficient kernel offloading to accelerators such as GPUs, FPGAs, or other dedicated circuits.
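The memory-folding idea can be illustrated in its simplest, one-dimensional form: a modular mapping i -> i mod b is legal exactly when b divides no difference between two conflicting (simultaneously live) indices. The sketch below is a toy version of that criterion; the thesis works with multi-dimensional Euclidean lattices and non-convex conflict sets, which this example does not capture.

```python
def fold_modulus(conflicts):
    """Smallest modulus b such that folding i -> i % b keeps every pair of
    conflicting (simultaneously live) array cells in distinct locations.

    conflicts: iterable of index pairs (i, j) that must not share storage.
    Two indices collide under the folding iff b divides their difference,
    so we search upward for the first b dividing no conflict difference.
    Termination is guaranteed: b = max difference + 1 is always legal.
    """
    diffs = {abs(i - j) for i, j in conflicts if i != j}
    b = 1
    while any(d % b == 0 for d in diffs):
        b += 1
    return b
```

For a stencil whose cells conflict at distances 1 and 2, a circular buffer of size 3 suffices, matching the intuition that only three values are live at once.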
83

Composição de coordenadas normais de Rieman locais e geometria poliedral em aprendizado de variedades com aplicações de teoria de folheações / Composition of local normal Riemann coordinates and polyhedral geometry in manifolds learning with applications of foliations theory

Miranda Junior, Gastão Florêncio 02 July 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes) / Manifold learning techniques have been used for dimensionality reduction in applications involving pattern recognition, data mining, and computer vision. This thesis describes our recent work in this area as well as perspectives for future work. First, we propose a methodology called Local Riemannian Manifold Learning (LRML), which recovers the topology and geometry of a manifold using local systems of normal coordinates computed through the exponential map. The local strategy of LRML has the advantage of reducing the accumulation of errors during the manifold learning process. However, the obtained parameterization cannot be used as an unambiguous representation space, and the synthesis process needs a triangulation of the domain in parameter space to be performed efficiently. We address this drawback of LRML with a composition procedure that structures the neighborhoods of normal coordinates, building a global representation space that locally preserves radial geodesic distances. In addition, we add a geometric structure based on triangulations, obtaining an efficient methodology for the synthesis process. We also explore discrete geometry concepts for generating piecewise linear manifolds for data analysis. In computational experiments, we verify the efficiency of LRML combined with the composition process and the discrete geometry framework for synthesis and data mining. We apply foliation theory to images of human faces with multiple facial expressions, and conclude that this approach is a promising one for studying the geometry and topology of the space of human face images.
84

Runtime optimization of binary through vectorization transformations / Optimisation dynamique de code binaire par des transformations vectorielles

Hallou, Nabil 18 December 2017 (has links)
In many cases, applications are not optimized for the hardware on which they run. This is due to the backward compatibility of ISAs, which guarantees functionality but not the best exploitation of the hardware. Many reasons contribute to this unsatisfying situation, such as legacy code, commercial code distributed in binary form, or deployment on compute farms. Our work focuses on maximizing CPU efficiency for SIMD extensions. The first contribution is a lightweight binary translation mechanism that does not include a vectorizer, but instead leverages what a static vectorizer previously did. We show that many loops compiled for x86 SSE can be dynamically converted to the more recent and more powerful AVX, and how correctness is maintained with regard to challenges such as data dependencies and reductions. We obtain speedups in line with those of a native compiler targeting AVX. The second contribution is runtime auto-vectorization of scalar loops. For this purpose, we use open-source frameworks that we have tuned and integrated to (1) dynamically lift the x86 binary into the intermediate representation of the LLVM compiler, (2) abstract hot loops in the polyhedral model, (3) use the power of this mathematical framework to vectorize them, and (4) finally compile them back into executable form using the LLVM just-in-time compiler. In most cases, the obtained speedups are close to the number of elements that can be simultaneously processed by the SIMD unit. The re-vectorizer and auto-vectorizer are implemented inside a dynamic optimization platform; the platform is completely transparent to the user, does not require any rewriting of the binaries, and operates during program execution.
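The closing claim, speedups close to the number of elements the SIMD unit processes at once, is simple lane arithmetic. The helpers below make that estimate explicit; they are an idealized model that ignores memory bandwidth and alignment effects.

```python
def simd_lanes(register_bits, element_bits):
    """Number of elements one SIMD instruction processes: the register
    width divided by the element width (e.g. AVX 256-bit / float32 = 8)."""
    return register_bits // element_bits

def revectorize_speedup(src_register_bits, dst_register_bits):
    """Ideal speedup of re-vectorizing a loop to a wider SIMD extension
    (e.g. SSE 128-bit -> AVX 256-bit): the ratio of lane counts."""
    return dst_register_bits / src_register_bits
```

For 32-bit floats this predicts an 8x ceiling for scalar-to-AVX auto-vectorization and a 2x ceiling for the SSE-to-AVX re-vectorizer, consistent with the "in line with a native compiler" observation above.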
85

Um estudo do politopo e dos limites inferiores gerados pela formulação de coloração dos representantes / A study on the polytope and lower bounds of the representatives coloring formulation

Victor Almeida Campos 31 August 2005 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The vertex coloring problem is one of the most studied problems in graph theory, owing to its relevance in practical and theoretical fields. From a theoretical point of view, it is NP-hard; moreover, it is among the hardest problems in NP, in the sense that finding an approximation of the chromatic number is also NP-hard. The importance of the coloring problem motivates the search for methods that find lower bounds close to the chromatic number. Historically, the first lower bounds used were obtained from the sizes of maximal cliques. More recently, linear relaxations of integer programming formulations have gained attention. One formulation that yields good lower bounds is the independent set formulation, whose relaxation value equals the fractional chromatic number. In this work, we compare the known integer programming formulations to motivate our choice of the Representatives formulation. We revise this formulation to remove symmetry and present a partial study of the polytope associated with the convex hull of its integer solutions. We discuss how to use the Representatives formulation to obtain lower bounds on the fractional chromatic number, implement a cutting-plane method to approximate the fractional chromatic number, and show that the lower bounds we generate usually differ from it by no more than one unit.
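The two classical bounds mentioned here, a clique-based lower bound and a coloring-based upper bound, can be sketched on a toy graph. This is an illustrative brute-force version (only viable for tiny graphs), not the cutting-plane method of the thesis.

```python
from itertools import combinations

def greedy_coloring(adj):
    """Greedy upper bound on the chromatic number: color vertices in order
    of decreasing degree with the smallest color unused by neighbors."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return max(color.values()) + 1

def max_clique_size(adj):
    """Brute-force maximum clique: a classical lower bound on the
    chromatic number, since a k-clique needs k colors."""
    vertices = list(adj)
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return r
    return 0
```

On the 5-cycle the clique bound gives 2 while the true chromatic number is 3 (and the fractional chromatic number is 2.5), illustrating why stronger lower bounds than cliques are worth pursuing.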
86

New Design Methods For Polyhedral Linkages

Kiper, Gokhan 01 September 2006 (has links) (PDF)
This thesis analyses the existing types of polyhedral linkages and presents new linkage types for resizing polyhedral shapes. First, the transformation characteristics, most specifically the magnification performances, of existing polyhedral linkages are given. Then, methods for synthesizing single-degree-of-freedom planar polygonal linkages are described; the polygonal linkages synthesized are used as the faces of polyhedral linkages. Next, the derivation of some of the existing linkages using the given method is presented. Finally, some designs of cover panels for the linkages are given. Cardan motion is the key element in both the analysis of existing linkages and the synthesis of new ones.
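Cardan motion, the key tool cited above, is the motion of a rod whose endpoints slide along two perpendicular lines; any fixed point of the rod then traces an ellipse. A minimal parametric sketch in our own notation:

```python
import math

def cardan_point(L, r, t):
    """Point fixed at fraction r along a rod of length L whose endpoints
    slide on the x- and y-axes (Cardan motion). As t varies, the point
    traces an ellipse with semi-axes (1 - r) * L and r * L."""
    end_on_x = (L * math.cos(t), 0.0)   # endpoint constrained to the x-axis
    end_on_y = (0.0, L * math.sin(t))   # endpoint constrained to the y-axis
    x = (1 - r) * end_on_x[0] + r * end_on_y[0]
    y = (1 - r) * end_on_x[1] + r * end_on_y[1]
    return x, y
```

Checking the ellipse equation (x/a)^2 + (y/b)^2 = 1 with a = (1 - r)L and b = rL confirms the trace; the rod's midpoint (r = 1/2) moves on a circle.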
87

Polyedrisierung dreidimensionaler digitaler Objekte mit Mitteln der konvexen Hülle / Polyhedrization of three-dimensional digital objects by means of the convex hull

Schulz, Henrik 05 November 2008 (has links) (PDF)
For the visualization of three-dimensional digital objects, generally only their surface is of interest. Since imaging techniques digitize the entire spatial object as a volume structure, the surface must be computed from the data. This thesis presents an algorithm that approximates the surface of three-dimensional digital objects given as a set of voxels, generating polyhedra with the property of separating the object's voxels from the background voxels. Furthermore, non-convex objects are classified, and it is investigated for which classes of objects the generated polyhedra have the minimal number of faces and the minimal surface area.
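A small sketch of the separation property discussed above: whatever polyhedron the algorithm produces must cover every unit face lying between an object voxel and a background voxel. The helper below merely counts those faces; it is not the convex-hull-based construction of the thesis.

```python
def boundary_faces(voxels):
    """Count the unit faces separating object voxels from the background,
    i.e. the voxel-level surface any separating polyhedron must respect.
    voxels is a set of integer (x, y, z) coordinates; everything else is
    background."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = 0
    for (x, y, z) in voxels:
        for dx, dy, dz in offsets:
            if (x + dx, y + dy, z + dz) not in voxels:
                faces += 1   # neighbor is background: this face is exposed
    return faces
```

A single voxel exposes all 6 faces; gluing two voxels together hides one face on each, leaving 10, which is why approximating polyhedra can have far fewer and smaller faces than the raw voxel surface.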
88

Synthesis, Characterization and Thermal Decomposition of Hybrid and Reverse Fluorosilicones

Conrad, Michael Perry Cyrus 18 February 2010 (has links)
Traditional fluorosilicones contain a siloxane backbone and a pendant fluorinated group, leading to low-temperature ductility and excellent thermal stability. However, acidic or basic catalysts can reduce the thermal stability from a potential 350 °C to 150 °C. The predominant decomposition mechanism is chain scission, and it is hypothesized that preventing it will result in polymers with higher thermal stability. Three approaches were taken to prevent chain scission. First, a series of hybrid fluorosilicones based on (trifluorovinyl)benzene were synthesized through condensation polymerization, with initial decomposition temperatures of approximately 240 °C. These were compared to similar aromatic polyethers; removal of the ether oxygen lowered the initial decomposition temperature by approximately 190 °C, demonstrating the importance of this oxygen to the stability of polyethers. Second, reverse fluorosilicone (fluorinated backbone and pendant siloxane) terpolymers of chlorotrifluoroethylene (CTFE), vinyl acetate (VAc), and methacryloxypropyl-terminated polydimethylsiloxane (PDMSMA) were synthesized in supercritical CO2 (scCO2) or by emulsion polymerization. Chain scission was prevented, as initial decomposition occurred between 231 and 278 °C. In both the emulsion and scCO2 cases, VAc was essential in facilitating cross-propagation between CTFE and PDMSMA, and the branching was similar, suggesting that the polymerization medium does not affect polymer structure. Emulsion-based polymers had higher molar masses and thermal stability, whereas comparable scCO2 polymers had higher yields and incorporated more PDMSMA. Third, a series of homo-, co-, and terpolymers of CTFE, VAc, and methacryloxypropyl-terminated silsesquioxane (POSSMA) were synthesized, representing the first synthesis of POSSMA-containing polymers in scCO2 and demonstrating that reverse fluorosilicones can be synthesized without VAc.
Chain scission was prevented, as initial decomposition occurred from 244 to 296 °C, with thermal stability increasing with CTFE content up to a limit. Decomposition of the polymers was examined and the mechanisms elucidated. In air, the copolymers give 40 to 47 wt% char, since the silsesquioxane oxidizes to SiO2, while in N2 no residue is seen. In contrast, the terpolymers give a carbonaceous residue of approximately 20 wt% in N2. The flammability and surface properties of the polymers were examined, with the terpolymers having flammability similar to p(CTFE) and surface properties comparable to p(POSSMA), giving a low-flammability, hydrophobic polymer.
90

Efficient search-based strategies for polyhedral compilation : algorithms and experience in a production compiler

Trifunovic, Konrad 04 July 2011 (has links) (PDF)
To exploit the performance of current multicore and heterogeneous architectures, compilers must perform increasingly complex program transformations. The search space of possible program optimizations is huge and unstructured, and selecting the best transformation and predicting its potential performance benefit is the major problem in today's optimizing compilers. A promising approach is to focus on automatic loop optimizations expressed in the polyhedral model. Current approaches for optimizing programs in the polyhedral model broadly fall into two classes: the first is based on linear optimization of an analytical cost function, the second on exhaustive iterative search. While the first approach is fast, it can easily miss the optimal solution; the iterative approach is more precise, but its running time can be prohibitive. In this thesis we present a novel search-based approach to program transformations in the polyhedral model. The new method combines the benefits of the current approaches, effectiveness and precision, while minimizing their drawbacks. Our approach is based on enumerating evaluations of a precise, nonlinear performance-predicting cost function. Current practice is to use the polyhedral model in source-to-source compilers; we have instead implemented our techniques in a GCC framework based on the low-level three-address-code representation. We show that this level of abstraction for the intermediate representation poses scalability challenges, and we show ways to overcome them. On the other hand, the low-level IR opens new degrees of freedom that benefit search-based transformation strategies and polyhedral compilation in general.
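The search strategy described, enumerating candidate transformations and evaluating each with a nonlinear cost function, can be caricatured on loop orders. Everything below (the candidate space, the toy cost model) is an illustrative assumption, far simpler than the polyhedral schedules the thesis actually searches over.

```python
from itertools import permutations

def pick_best_loop_order(loops, cost):
    """Enumerate candidate transformations (here: permutations of a loop
    nest) and keep the one minimizing a performance-predicting cost
    function, mirroring a search-based optimizer's select-by-evaluation
    loop."""
    return min(permutations(loops), key=cost)

def toy_cost(order):
    """Hypothetical nonlinear cost model: heavily penalize a non-unit-
    stride innermost loop, mildly penalize moving the parallel loop
    inward. The weights are made up for illustration."""
    penalty = 0 if order[-1] == "j" else 100   # "j" is the unit-stride dim
    penalty += 0 if order[0] == "i" else 10    # "i" is the parallel dim
    return penalty
```

With these assumptions the search settles on the order that keeps "i" outermost and "j" innermost, the same kind of trade-off a real cost model arbitrates among tilings, interchanges, and fusions.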
