
Adaptive Relaxed Synchronization through the Use of Supervised Learning Methods

Andre Luis Cavalcanti Bueno, 31 July 2018
Parallel computing systems have become pervasive, being used to interact with the physical world and to process large amounts of data from various sources. Continuous improvement of computational performance is therefore essential to keep up with the growing rate at which information must be processed. Some applications tolerate lower quality in the final result in exchange for increased execution performance. This work evaluates the feasibility of using supervised learning methods to ensure that the Relaxed Synchronization technique, used to increase execution performance, yields results within acceptable error bounds. To that end, we created a methodology that uses some of the input data to assemble test cases which, when executed, provide representative training inputs for supervised learning methods. When the user then runs the application (in the same environment used for training) with a new input, the trained classification algorithm suggests the relaxation factor best suited to the application/input/execution-environment triple. We applied this methodology to several well-known parallel applications and showed that, by combining Relaxed Synchronization with supervised learning methods, the agreed maximum error rate was maintained. In addition, we evaluated the performance gain obtained with this technique for several scenarios in each application.
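The methodology above can be sketched in a few lines: test-case runs yield labelled examples (input features paired with the largest relaxation factor whose observed error stayed within the agreed bound), and a classifier then suggests a factor for unseen inputs. The features, the numbers, and the nearest-neighbour choice below are illustrative assumptions, not the thesis' actual setup.

```python
import math

# Hypothetical training data gathered from test-case runs: each entry is
# ((input_size, data_skew), best_factor), where best_factor is the largest
# relaxation factor whose observed error stayed under the agreed bound.
TRAINING = [
    ((1_000, 0.1), 1), ((1_000, 0.9), 1),
    ((50_000, 0.1), 4), ((50_000, 0.9), 2),
    ((900_000, 0.1), 8), ((900_000, 0.9), 4),
]

def suggest_factor(features, k=1):
    """k-nearest-neighbour vote over the recorded training runs."""
    def dist(a, b):
        # crude normalization so size (large range) does not drown out skew
        return math.hypot((a[0] - b[0]) / 1e6, a[1] - b[1])
    nearest = sorted(TRAINING, key=lambda run: dist(run[0], features))[:k]
    votes = {}
    for _, factor in nearest:
        votes[factor] = votes.get(factor, 0) + 1
    return max(votes, key=votes.get)

print(suggest_factor((800_000, 0.2)))  # → 8 (large, low-skew input)
```

The key property is that the suggestion is only trusted for the environment the classifier was trained in, exactly as the abstract stipulates.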

A Linear Logic-based methodology for the analysis of interorganizational workflow processes

Passos, Lígia Maria Soares, 22 February 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work formalizes four Linear Logic-based methods for the verification of interorganizational workflow processes modelled by interorganizational WorkFlow nets, the Petri nets that model such processes. The first method verifies the Soundness criterion for interorganizational workflow processes; it is based on the construction and analysis of Linear Logic proof trees representing both the local processes and the global process. The second and third methods verify, respectively, the Relaxed Soundness and Weak Soundness criteria, and are obtained by reusing the Linear Logic proof trees constructed for the Soundness verification. The fourth method detects deadlock-free scenarios in interorganizational workflows; it is based on the construction and analysis of Linear Logic proof trees that consider first the local processes and the communication between them, and then the candidate scenarios. A case study is carried out in the context of Web service composition verification, since there is a close correlation between the modelling of an interorganizational workflow process and a Web service composition; the four methods proposed in the interorganizational workflow context are therefore applied to a Web service composition. The evaluation of the results shows that the reuse of Linear Logic proof trees initially built for Soundness verification does in fact occur when verifying the Relaxed Soundness and Weak Soundness criteria. It also shows how Linear Logic sequents and their proof trees make explicit the collaboration possibilities existing in a Web service composition. An evaluation of the number of proof trees constructed shows that this number can be significantly reduced in the deadlock-freeness scenario detection method. An approach for resource planning based on symbolic date calculation, using data extracted from the Linear Logic proof trees, is presented and validated through simulations performed with the CPN Tools simulator. Finally, two approaches for monitoring deadlock-free scenarios are introduced, showing how data obtained from the Linear Logic proof trees can guide the execution of such scenarios. / Doctorate in Computer Science
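The soundness-style properties verified above all concern whether every execution of a workflow net can still complete. The thesis establishes them with Linear Logic proof trees; purely to illustrate the property being checked, here is a brute-force reachability check on a tiny invented interorganizational net (safe, 1-bounded markings assumed, so a marking is just a set of places — feasible only for toy nets).

```python
# Toy interorganizational workflow net: two partners, a source place "i" and
# a sink place "o". Each transition consumes from `pre` and produces on `post`.
# All names are hypothetical; this is not a net from the thesis.
TRANSITIONS = {
    "send_order":   ({"i"}, {"wait_ack", "peer_in"}),
    "peer_process": ({"peer_in"}, {"peer_out"}),
    "recv_ack":     ({"wait_ack", "peer_out"}, {"o"}),
}

def fire(marking, pre, post):
    """Fire a transition if enabled; return the new marking, else None."""
    if not pre <= marking:
        return None
    return frozenset((marking - pre) | post)

def reachable(start):
    """All markings reachable from `start` (depth-first exploration)."""
    seen, stack = {start}, [start]
    while stack:
        m = stack.pop()
        for pre, post in TRANSITIONS.values():
            nxt = fire(m, pre, post)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def sound(initial=frozenset({"i"}), final=frozenset({"o"})):
    # Soundness for safe nets: from every reachable marking the final marking
    # remains reachable, and once "o" is marked no other token is left behind.
    for m in reachable(initial):
        if final not in reachable(m):
            return False
        if "o" in m and m != final:
            return False
    return True

print(sound())  # → True for this net
```

The interest of the Linear Logic approach is precisely that it avoids building this reachability graph, whose size explodes on realistic interorganizational processes.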

Evolutionary mechanisms of plant adaptation illustrated by cytochrome P450 genes under purifying or relaxed selection

Liu, Zhenhua, 21 March 2014
Plants produce a remarkable diversity of secondary metabolites to cope with continually challenging and fluctuating environmental constraints. However, how plants reached such a high degree of metabolic complexity, and which evolutionary forces are responsible for this chemodiversity, remains largely unclarified. Gene evolution based on gene birth and extinction has been reported to reflect natural evolution well. In the absence of horizontal gene transfer, young genes are often restricted to a few species and undergo rapid evolution, whereas old genes can be broadly distributed and are usually indicative of essential housekeeping functions. It is thus of interest to study plant adaptation with a parallel focus on both taxonomically widespread and lineage-specific genes.
P450s form one of the largest protein families in plants, featuring both conserved and branched phylogenies, and examples of P450 properties reflecting metabolic versatility, chemodiversity and thus plant adaptation have been reported. To illustrate the evolution of P450 functions in plant metabolism, we selected two P450 genes in Arabidopsis thaliana: the evolutionarily conserved CYP715A1, and CYP98A9, a recently specialized lineage-specific gene. CYP715s evolved before the divergence between gymnosperms and angiosperms and are present as a single copy in most sequenced plant genomes, suggesting an essential housekeeping function highly conserved across seed plants. Based on transcriptome analysis and promoter-driven GUS expression, CYP715A1 is selectively expressed in tapetal cells of young buds and in filaments of open flowers during flower development. In addition, CYP715A1 is highly induced in the pericycle cells of the root elongation zone upon salt stress. Salt induction relies on the region 2-3 kb upstream in the CYP715A1 promoter, suggesting that salt-response elements may exist in this area. To characterize the function of CYP715A1 in Arabidopsis, I identified two T-DNA insertion mutants by genotyping, confirmed by complementation with the native CYP715A1 gene. Loss of function of CYP715A1 has no impact on plant growth or fertility under laboratory conditions. However, transmission electron microscopy (TEM) revealed a consistent undulated-intine phenotype in the two knockout mutants, and petal growth is significantly inhibited; these two phenotypes match the native expression pattern of CYP715A1. Gene co-expression analysis suggests an involvement of CYP715A1 in gibberellin (GA) metabolism under salt treatment, and GA profiling of mutant flowers indicates reduced accumulation of specific GAs. Unfortunately, no significant phenotype related to root growth or root architecture under salt treatment could be observed.
Recombinant expression of the CYP715A1 enzyme in yeast has so far not allowed confirmation of GA metabolism. However, metabolic profiling of inflorescences in mutant and over-expression lines, together with transcriptome analysis of the loss-of-function cyp715a1 mutants, strongly supports a role for CYP715A1 in signalling, hormone homeostasis and volatile emission, in agreement with the purifying selection leading to the gene conservation observed in spermatophytes. [...]

Assessment of residual stress by the hole-drilling method using FEM

Civín, Adam, January 2008
The residual stress state in structural materials affects the behaviour of components either positively or negatively. This work does not deal with the processes that create residual stresses, nor with their elimination; it focuses on determining the magnitude of residual stress by the hole-drilling method. To determine how the residual stress state affects the behaviour of a specimen, the magnitude and direction (angular orientation) of the principal stresses must be known. The most widely used modern technique for measuring residual stresses is the hole-drilling strain-gauge method, which is the subject of this thesis and is restricted here to measuring uniform residual stresses in steel specimens of finite dimensions. A structural, linear, elastic and isotropic material model is used, with material properties μ = 0.3 and E = 2.1 × 10^5 MPa. For correct application of the method, the calibration coefficients "a" and "b" must first be determined; they are used to compute the magnitude and direction of residual stresses for a specific depth and diameter of the drilled hole in materials of finite dimensions. The model geometry is simply represented by a block with planar faces. Note that the numerical determination of the calibration coefficients applies only to one type of strain-gauge rosette, the RY 61 S. The main goals of this thesis are to report clearly the effectiveness, accuracy and applicability of the calibration coefficients in relation to the specimen thickness, the dimensions of the drilled hole, the "through" versus "blind" hole condition, and the number of drilled increments. High quality and accuracy of the numerical model are also necessary, and a numerical FEM simulation of residual stresses had to be performed to obtain the requested results. All results are presented in 3D and 2D graphs and in tables, and are compared with analytical results or with results of other authors.
Although this work focuses on numerical modelling using FEM, the hole-drilling method itself has significant restrictions: the most substantial are the eccentricity of the drilled hole, the stress concentration near the drilled area and subsequent plastification, and the geometrical inaccuracy of the hole. All of these significantly influence the determination of the calibration coefficients and cannot be included in the numerical simulation; they are discussed in detail in the background research. The results obtained should be helpful for the practical use of the calculated calibration coefficients to determine uniform residual stresses in specimens of various thicknesses and hole dimensions. They too are applicable only to the RY 61 S strain-gauge rosette.
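Once calibration coefficients are available, the hole-drilling evaluation itself is a short computation. The sketch below follows the generic ASTM E837-style uniform-stress relations, not the thesis' FEM model; the coefficients a_bar and b_bar are illustrative placeholders (in the thesis they are determined numerically for the RY 61 S rosette and depend on hole depth, hole diameter and specimen thickness).

```python
import math

E  = 2.1e5   # Young's modulus [MPa], as used in the thesis
NU = 0.3     # Poisson's ratio, as used in the thesis

def principal_stresses(eps1, eps2, eps3, a_bar=-0.1, b_bar=-0.2):
    """Relieved strains from the three rosette grids -> principal stresses.

    eps1 and eps3 are the outer grids, eps2 the middle grid (45 deg apart).
    a_bar/b_bar are dimensionless calibration coefficients (placeholders here).
    """
    p = (eps3 + eps1) / 2.0              # isotropic strain combination
    q = (eps3 - eps1) / 2.0              # shear combination
    t = (eps3 + eps1 - 2.0 * eps2) / 2.0 # 45-deg shear combination
    P = -E / (1.0 + NU) * p / a_bar      # isotropic stress
    Q = -E * q / b_bar                   # shear stress components
    T = -E * t / b_bar
    s_max = P + math.hypot(Q, T)
    s_min = P - math.hypot(Q, T)
    beta = 0.5 * math.atan2(-T, -Q)      # angle of s_max from grid 1 [rad]
    return s_max, s_min, beta
```

For an equi-biaxial relieved strain (all three grids equal) the shear combinations vanish and the two principal stresses coincide, which is a quick sanity check on any implementation.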

Search and Aggregation in Big Graphs

Habi, Abdelmalek, 26 November 2019
Recent years have witnessed renewed interest in graphs as a reliable means of representing and modelling data in various fields of computer science. For massive data in particular, graphs arise as a promising alternative to relational databases, and querying data graphs proves to be a crucial task for exploring the knowledge in these datasets. This dissertation investigates two main problems. In the first part, we address the problem of detecting patterns in large graphs: the top-k graph pattern matching problem. We introduce a new graph pattern matching model named Relaxed Graph Simulation (RGS), which identifies significant matches and avoids the empty-answer problem. We formalize and study the top-k matching problem based on two classes of functions, relevance (the best similarity between the pattern and the matches) and diversity (the dissimilarity between matches), for ranking the matches according to the RGS model. We also consider the diversified top-k matching problem and propose a diversification function to balance relevance and diversity. Moreover, we provide efficient algorithms, based on optimization strategies, to compute the top-k and the diversified top-k matches according to the proposed model. The approach is efficient in terms of search time and flexible in terms of applicability. The analysis of the time complexity of the algorithms and extensive experiments on real-life datasets demonstrate both the effectiveness and the efficiency of these approaches.
In the second part, we tackle the problem of graph querying using the aggregated search paradigm. We consider this problem for a particular type of graph, trees, and deal with query processing in XML documents: for a query tree, the goal is to find matching patterns in one or several XML documents and to aggregate them into a single answer. We first give the motivation behind this paradigm and explain its potential benefits compared to traditional querying approaches. We then propose a new method for aggregated tree search, based on an approximate tree-matching algorithm over several tree fragments, which aims to build, to the extent possible, a coherent and more complete answer by combining several results from several data sources. The proposed solutions are shown to be efficient in terms of relevance and quality on different real-life datasets.
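The diversified top-k idea admits a compact generic sketch: greedily add the match that maximizes a weighted combination of its relevance and its distance to the matches already selected. The λ-weighted score, the toy matches and the distance function below are assumptions for illustration; the thesis defines its own diversification function over RGS matches.

```python
def diversified_top_k(matches, relevance, distance, k, lam=0.5):
    """Greedy selection: repeatedly add the candidate maximizing
    lam * relevance + (1 - lam) * (min distance to the selected set)."""
    selected = []
    candidates = list(matches)
    while candidates and len(selected) < k:
        def score(m):
            div = min((distance(m, s) for s in selected), default=1.0)
            return lam * relevance[m] + (1 - lam) * div
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example (invented match ids): m1 and m2 are nearly duplicate answers,
# m3 is less relevant but structurally different.
rel = {"m1": 0.9, "m2": 0.85, "m3": 0.4}
overlap = {("m1", "m2"): 0.9, ("m2", "m1"): 0.9}
dist = lambda a, b: 1.0 - overlap.get((a, b), 0.0)
print(diversified_top_k(["m1", "m2", "m3"], rel, dist, k=2))  # → ['m1', 'm3']
```

The near-duplicate m2 loses to the dissimilar m3 despite its higher relevance, which is exactly the trade-off a diversification function is meant to encode.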

Optimization of memory management on distributed machines

Ha, Viet Hai, 5 October 2012
In order to further exploit the capabilities of parallel architectures such as grids, clusters, multi-processors and, more recently, clouds and multi-cores, an easy-to-use universal parallel language remains an important challenge. From the programmer's point of view, OpenMP is very easy to use, largely thanks to its support for incremental parallelization, its features for dynamically setting the number of threads, and its scheduling strategies. However, as it was initially designed for shared-memory systems, OpenMP is usually limited on distributed-memory systems to intra-node computations. Many attempts have been made to port OpenMP to distributed systems. The most successful approaches focus on exploiting a special network architecture and therefore cannot provide an open solution. Others are based on an already available software solution such as DSM, MPI or Global Array and consequently have difficulty providing a fully compliant, high-performance implementation of OpenMP. CAPE, which stands for Checkpointing Aided Parallel Execution, is an alternative approach to building a compliant implementation of OpenMP for distributed-memory systems. The idea is the following: when reaching a parallel section, the image of the master thread is saved and sent to the slaves; each slave then executes one of the threads; at the end of the parallel section, each slave extracts the list of all modifications that were performed locally and sends it back to the master; the master integrates these modifications and resumes its execution. To prove the feasibility of this approach, the first version of CAPE was implemented using complete checkpoints. However, preliminary analysis showed that the large amount of data transferred between threads, and the extraction of the modification list from complete checkpoints, led to weak performance. Furthermore, this version was restricted to parallel problems satisfying Bernstein's conditions, i.e. it did not handle shared data. This thesis presents the approaches we proposed to improve CAPE's performance and to overcome the restrictions on shared data.
First, we developed DICKPT (Discontinuous Incremental ChecKPoinTing), an incremental checkpointing technique that supports taking incremental checkpoints discontinuously during the execution of a process. Based on DICKPT, the execution speed of the new version of CAPE increased significantly: for example, the time to compute a large matrix-matrix product on a desktop cluster became very similar to the execution time of an optimized MPI program, and the speedup of this new version for various numbers of threads is quite linear across different problem sizes. For shared data, we proposed UHLRC (Updated Home-based Lazy Release Consistency), a modified version of the Home-based Lazy Release Consistency (HLRC) memory model, to make it more appropriate to the characteristics of CAPE. Prototypes and algorithms implementing the synchronization and the OpenMP data-sharing clauses and directives are also specified. Together, these two contributions enable CAPE to meet OpenMP's shared-data requirements.
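The modification-list cycle at the heart of CAPE can be sketched with dictionaries standing in for process memory. This is a hedged illustration only: page-granularity diffs and disjoint writes (Bernstein's conditions, as in the first CAPE version) are assumed, whereas real CAPE operates on checkpoint images of the process.

```python
# Assumption: a "memory" is a dict of page-id -> bytes; real CAPE diffs
# process checkpoint images, not Python dicts.
def extract_modifications(before, after):
    """The pages a slave changed locally, as {page: new_value}."""
    return {p: v for p, v in after.items() if before.get(p) != v}

def integrate(master, *mod_lists):
    """Master merges the slaves' modification lists.
    Disjoint writes are assumed (Bernstein's conditions)."""
    merged = dict(master)
    for mods in mod_lists:
        merged.update(mods)
    return merged

base = {"p0": b"A", "p1": b"B", "p2": b"C"}   # master image sent to slaves
slave1 = dict(base, p1=b"B1")                 # slave 1 wrote page p1
slave2 = dict(base, p2=b"C2")                 # slave 2 wrote page p2
mods1 = extract_modifications(base, slave1)   # {'p1': b'B1'}
mods2 = extract_modifications(base, slave2)   # {'p2': b'C2'}
print(integrate(base, mods1, mods2))
```

DICKPT's contribution, in these terms, is making `extract_modifications` cheap by recording only the pages touched since the last (possibly discontinuous) checkpoint instead of diffing complete images.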

New Heuristics for Planning with Action Costs

Keyder, Emil Ragip, 17 December 2010
Classical planning is the problem of finding a sequence of actions that takes an agent from an initial state to a desired goal situation, assuming deterministic action outcomes and perfect information. Satisficing planning seeks to quickly find low-cost solutions with no guarantees of optimality. The most effective approach to satisficing planning has proved to be heuristic search using non-admissible heuristics. In this thesis, we introduce several such heuristics that are able to take costs on actions into account, and therefore try to minimize the more general metric of plan cost rather than plan length, and we investigate their properties and performance. In addition, we show how the problem of planning with soft goals can be compiled into a classical planning problem with costs, a setting in which cost-sensitive heuristics such as those presented here are essential.
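The soft-goal compilation mentioned at the end admits a compact sketch: each soft goal g with utility u(g) becomes a hard goal achievable either by a zero-cost "collect" action requiring g, or by a "forgo" action whose cost is the utility lost. The dictionary-based action encoding and the names below are illustrative assumptions; the actual compilation in the thesis carries additional machinery (such as an explicit end-of-plan phase, hinted at here by the `end_mode` precondition).

```python
def compile_soft_goals(soft_goals):
    """soft_goals: dict atom -> utility.
    Returns (new_actions, new_hard_goals) for the compiled cost-based task."""
    actions, hard_goals = [], []
    for g, utility in soft_goals.items():
        done = f"done_{g}"
        # Achieve the soft goal: free, but only if g actually holds.
        actions.append({"name": f"collect_{g}", "pre": [g, "end_mode"],
                        "eff": [done], "cost": 0})
        # Give it up: always possible, at a cost equal to the lost utility.
        actions.append({"name": f"forgo_{g}", "pre": ["end_mode"],
                        "eff": [done], "cost": utility})
        hard_goals.append(done)
    return actions, hard_goals

acts, goals = compile_soft_goals({"image_sent": 5, "log_saved": 2})
print(goals)            # → ['done_image_sent', 'done_log_saved']
print(acts[1]["cost"])  # → 5   (forgoing image_sent costs its utility)
```

A minimum-cost plan for the compiled task then corresponds to a maximum-utility plan for the original soft-goal task, which is why cost-sensitive heuristics are essential in this setting.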
48

Défauts et diffusion dans le silicium amorphe

Diop, Ousseynou 08 1900 (has links)
Nous avons observé une augmentation « transitoire » du taux de cristallisation interfaciale de l'a-Si lorsqu'on réimplante du Si à proximité de l'interface amorphe/cristal. Après amorphisation et traitement thermique à 650°C pendant 5 s de la couche a-Si créée par implantation ionique, une partie a été réimplantée. Les défauts produits par auto-réimplantation à 0.7 MeV se trouvent à (302±9) nm de l'interface initiale. Cela nous a permis d'étudier davantage la variation initiale de la vitesse SPE (épitaxie en phase solide). Avec des recuits identiques de 4 h à 500°C, nous avons déterminé les positions successives des interfaces et en déduit les taux de cristallisation SPE. La cristallisation débute à l'interface et continue graduellement vers la surface. Après le premier recuit, (252±11) nm se sont recristallisés dans la zone réimplantée, soit un avancement SPE de 1.26x10^18 at./cm2. Cette valeur est environ 1.50 fois plus importante que celle dans l'état relaxé. Nous suggérons que la présence de défauts à proximité de l'interface a stimulé la vitesse initiale. Avec le nombre de recuits, l'écart entre les vitesses diminue ; les deux régions se cristallisent presque à la même vitesse. Les mesures Raman prises avant le SPE et après chaque recuit ont permis de quantifier l'état de relaxation de l'a-Si et le passage de l'état dé-relaxé à l'état relaxé. / We observed a transient increase of the planar crystallization rate of a-Si when Si is re-implanted near the amorphous/crystal interface. After amorphization and heat treatment at 650°C for 5 s, one part was re-implanted. The defects produced at 0.7 MeV by self-re-implantation are located at (302±9) nm from the initial interface. This allowed us to better study the initial variation of the SPE (solid phase epitaxy) rate. With recrystallization anneals at 500±4°C for 4 h, we determined the successive positions of the interfaces and deduced the SPE recrystallization rates.
Crystallization begins at the interface and continues gradually towards the surface. After the first anneal, (252±11) nm had recrystallized in the re-implanted region, an SPE advance of 1.26x10^18 at./cm2. This value is approximately 1.50 times greater than that in the relaxed state. We suggest that the presence of defects near the interface stimulated the initial rate. Over successive anneals, the gap between the rates decreases and both regions recrystallize at almost the same rate. Raman measurements taken before SPE and after each anneal allowed us to quantify the relaxation state of the a-Si and follow the transfer from the un-relaxed to the relaxed state. / Dans ce travail nous avons étudié le phénomène de diffusion du cuivre et de l'argent dans l'a-Si en présence d'hydrogène, à la température de la pièce et à la température de recuit. Une couche amorphe de 0.9 μm d'épaisseur a été produite par implantation de 28Si+ à 500 keV sur le c-Si (100). Après celle-ci, on procède à l'implantation du Cu et de l'Ag. Un traitement thermique a produit une distribution uniforme des impuretés dans la couche amorphe et la relaxation de défauts substantiels. Certains défauts dans l'a-Si, de type lacune, peuvent agir comme des pièges pour la mobilité du Cu et de l'Ag. L'hydrogène implanté après traitement thermique sert à dé-piéger les impuretés métalliques dans certaines conditions. Nous n'avons détecté aucune diffusion à la température de la pièce au bout d'un an ; en revanche, après recuit (1 h à 450°C) on observe la diffusion de ces métaux. Cela impliquerait qu'à la température de la pièce, même si l'hydrogène a dé-piégé les métaux, ces derniers n'ont pas pu franchir la barrière d'énergie nécessaire pour migrer dans le réseau. / In this work we studied the diffusion of copper and silver in a-Si in the presence of hydrogen, at room temperature and at annealing temperature. The 0.9 μm-thick a-Si layers were formed by ion implantation of 28Si+ at 500 keV into c-Si (100).
Cu and Ag ions were then implanted at 90 keV. The heat treatment produced a uniform distribution of impurities in the amorphous layer and the relaxation of substantial defects. Vacancy-type defects in a-Si can act as traps for the mobility of Cu and Ag. Hydrogen implanted after the heat treatment serves to de-trap metal impurities such as Cu and Ag under certain conditions. We did not detect any diffusion at room temperature over one year, but after annealing (450°C for 1 h) we observed the diffusion of these metals. This implies that at room temperature, even though hydrogen de-trapped the metals, they could not cross the energy barrier required to migrate through the network.
50

Access Path Based Dataflow Analysis For Sequential And Concurrent Programs

Arnab De, * 12 1900 (has links) (PDF)
In this thesis, we have developed a flow-sensitive dataflow analysis framework for value set analyses for Java-like languages. Our analysis framework is based on access paths: a variable followed by zero or more field accesses. We express our abstract states as maps from bounded access paths to abstract value sets. Using access paths instead of allocation sites enables us to perform strong updates on assignments to dynamically allocated memory locations. We also describe several optimizations to reduce the number of access paths that need to be tracked in our analysis. We have instantiated this framework for flow-sensitive pointer and null-pointer analysis for Java. We have implemented our analysis inside the Chord framework. A major part of our implementation is written declaratively using Datalog. We leverage the use of BDDs in Chord to keep our memory usage low. We show that our analysis is much more precise than, and faster than, traditional flow-sensitive and flow-insensitive pointer and null-pointer analyses for Java. We further extend our access path based analysis framework to concurrent Java programs. We use the synchronization structure of the programs to transfer abstract states from one thread to another. Therefore, we do not need to make conservative assumptions about reads or writes to shared memory. We prove our analysis to be sound for the happens-before memory model, which is weaker than most common memory models, including sequential consistency and the Java Memory Model. We implement a null-pointer analysis for concurrent Java programs and show it to be more precise than the traditional analysis.
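The strong-update property described above can be sketched with a single transfer function over access-path-keyed abstract states. This is an illustration of the idea only, not the Chord/Datalog implementation from the thesis; the tuple encoding of access paths is an assumption made for the sketch:

```python
def transfer_assign(state, lhs_path, rhs_path):
    """Sketch of a flow-sensitive transfer function over access paths.

    Abstract states map access paths (tuples such as ('x', 'f') for x.f)
    to abstract value sets. Because the key is an access path rather than
    an allocation site, an assignment can replace the target's value set
    outright rather than union into it: a strong update.
    """
    new_state = dict(state)  # states are immutable per program point
    # Strong update: the old value set of lhs_path is overwritten.
    new_state[lhs_path] = set(state.get(rhs_path, set()))
    return new_state
```

With allocation-site keys, the same assignment would usually require a weak update (a union), since one site may summarize many runtime objects; keying on bounded access paths is what recovers the precision the abstract reports.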
