1

Register allocation and spilling using the expected distance heuristic

Burroughs, Ivan Neil 12 April 2016
The primary goal of the register allocation phase in a compiler is to minimize register spills to memory. Spills, in the form of store and load instructions, affect execution time as the processor must wait for the slower memory system to respond. Deciding which registers to spill can benefit from execution frequency information, yet when this information is available it is not fully utilized by modern register allocators. We present a register allocator that fully exploits profiling information to minimize the runtime costs of spill instructions. We use the Furthest Next Use heuristic, informed by branch probability information, to decide which virtual register to spill when required. We extend this heuristic, which under the right conditions can lead to the minimum number of spills, to the control flow graph by computing the Expected Distance to next use. The Furthest Next Use heuristic, when applied to the control flow graph, only partially determines the best placement of spill instructions. We present an algorithm for optimizing spill instruction placement in the graph that uses block frequency information to minimize execution costs. Our algorithm quickly finds the best placements for spill instructions using a novel method for solving placement problems. We evaluate our allocator using both static and dynamic profiling information for the SPEC CINT2000 benchmark and compare it to the LLVM allocator. Targeting the ARMv7 architecture, we find average reductions in the number of store and load instructions of 36% and 50%, respectively, using static profiling, and 52% and 52% using dynamic profiling. We have also seen an overall improvement in benchmark speed.
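As a rough illustration of the furthest-next-use rule on a single basic block (the thesis extends it across the control flow graph by weighting distances with branch probabilities to obtain expected distances), here is a minimal sketch; the function name and data layout are invented for illustration and are not the thesis's implementation:

```python
from typing import Dict, List

def spill_candidate(live: List[str], pos: int, uses: Dict[str, List[int]]) -> str:
    """Pick the live virtual register whose next use is furthest away.

    `uses` maps each virtual register to the sorted positions at which
    it is used; registers with no remaining use are the best victims.
    """
    def next_use(v: str) -> float:
        future = [u for u in uses[v] if u > pos]
        return future[0] if future else float("inf")

    return max(live, key=next_use)

# Example: at position 3, prefer to spill the value whose next use
# lies furthest in the future.
uses = {"a": [1, 9], "b": [2, 4], "c": [3, 5]}
print(spill_candidate(["a", "b", "c"], 3, uses))  # -> 'a'
```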
2

Continuously Providing Approximate Results under Limited Resources: Load Shedding and Spilling in XML Streams

Wei, Mingzhu 18 December 2011
" Because of the high volume and unpredictable arrival rates, stream processing systems may not always be able to keep up with the input data streams, resulting in buffer overflow and uncontrolled loss of data. To continuously supply online results, two alternate solutions to tackle this problem of unpredictable failures of such overloaded systems can be identified. One technique, called load shedding, drops some fractions of data from the input stream to reduce the memory and CPU requirements of the workload. However, dropping some portions of the input data means that the accuracy of the output is reduced since some data is lost. To produce eventually complete results, the second technique, called data spilling, pushes some fractions of data to persistent storage temporarily when the processing speed cannot keep up with the arrival rate. The processing of the disk resident data is then postponed until a later time when system resources become available. This dissertation explores these load reduction technologies in the context of XML stream systems. Load shedding in the specific context of XML streams poses several unique opportunities and challenges. Since XML data is hierarchical, subelements, extracted from different positions of the XML tree structure, may vary in their importance. Further, dropping different subelements may vary in their savings of storage and computation. Hence, unlike prior work in the literature that drops data completely or not at all, in this dissertation we introduce the notion of structure-oriented load shedding, meaning selectively some XML subelements are shed from the possibly complex XML objects in the XML stream. First we develop a preference model that enables users to specify the relative importance of preserving different subelements within the XML result structure. This transforms shedding into the problem of rewriting the user query into shed queries that return approximate answers with their utility as measured by the user preference model. Our optimizer finds the appropriate shed queries to maximize the output utility driven by our structure-based preference model under the limitation of available computation resources. The experimental results demonstrate that our proposed XML-specific shedding solution consistently achieves higher utility results compared to the existing relational shedding techniques. Second, we introduces structure-based spilling, a spilling technique customized for XML streams by considering the spilling of partial substructures of possibly complex XML elements. Several new challenges caused by structure-based spilling are addressed. When a path is spilled, multiple other paths may be affected. We categorize varying types of spilling side effects on the query caused by spilling. How to execute the reduced query to produce the correct runtime output is also studied. Three optimization strategies are developed to select the reduced query that maximizes the output quality. We also examine the clean-up stage to guarantee that an entire result set is eventually generated by producing supplementary results to complement the partial results output earlier. The experimental study demonstrates that our proposed solutions consistently achieve higher quality results compared to the state-of-the-art techniques. Third, we design an integrated framework that combines both shedding and spilling policies into one comprehensive methodology. 
Decisions on the choice of whether to shed or spill data may be affected by the application needs and data arrival patterns. For some input data, it may be worth to flush it to disk if a delayed output of its result will be important, while other data would best directly dropped from the system given that a delayed delivery of these results would no longer be meaningful to the application. Therefore we need sophisticated technologies capable of deploying both shedding and spilling techniques within one integrated strategy with the ability to deliver the most appropriate decision customers need for each specific circumstance. We propose a novel flexible framework for structure-based shed and spill approaches, applicable in any XML stream system. We propose a solution space that represents all the shed and spill candidates. An age-based quality model is proposed for evaluating the output quality for different reduced query and supplementary query pairs. We also propose a family of four optimization strategies, OptF, OptSmart, HiX and Fex. OptF and OptSmart are both guaranteed to identify an optimal solution of reduced and supplementary query pair, with OptSmart exhibiting significantly less overhead than OptF. HiX and Fex use heuristic-based approaches that are much more efficient than OptF and OptSmart. "
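Choosing which subelements to preserve so that utility is maximized under a resource budget has the flavor of a knapsack problem. The sketch below is a deliberately simple greedy approximation with invented paths, utilities, and costs; the dissertation's preference model and optimizer are more sophisticated:

```python
from typing import List, Tuple

def choose_shed_plan(subelements: List[Tuple[str, float, float]],
                     budget: float) -> List[str]:
    """Greedily keep the subelements with the best utility-per-cost
    ratio until the processing budget is exhausted; the rest are shed.

    Each entry is (path, utility, cost). A knapsack-style
    approximation, not the dissertation's actual optimizer.
    """
    kept, spent = [], 0.0
    for path, utility, cost in sorted(
            subelements, key=lambda e: e[1] / e[2], reverse=True):
        if spent + cost <= budget:
            kept.append(path)
            spent += cost
    return kept

# Keep the most valuable parts of each streamed XML object under a
# budget of 1.0 units of processing; everything else is shed.
elems = [("/order/id", 1.0, 0.1), ("/order/items", 0.8, 0.7),
         ("/order/notes", 0.2, 0.5)]
print(choose_shed_plan(elems, budget=1.0))  # ['/order/id', '/order/items']
```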
3

Decoupled (SSA-based) register allocators: from theory to practice, coping with just-in-time compilation and embedded processors constraints

Colombet, Quentin 07 December 2012
My thesis deals with register allocation. During this phase, the compiler has to assign the variables of the source program, arbitrarily many of them, to the actual registers of the processor, which are limited in number to k. Recent works, for instance the theses of F. Bouchez and S. Hack, have shown that it is possible to split this phase into two fully decoupled steps: the spill (storing variables into memory to release registers), followed by the register assignment itself. These works demonstrate the feasibility of this decoupling, relying on a theoretical framework and some simplifying assumptions; in particular, it is sufficient to ensure, after the spill step, that the number of variables simultaneously live is at most k. My thesis follows on from these works by showing how to apply this kind of approach when real-world constraints come into play: instruction encoding, the ABI (application binary interface), and register banks with aliasing. Different approaches are proposed that either sidestep these problems or tackle them directly within the theoretical framework. The hypotheses of the models and the proposed solutions are evaluated and validated through a thorough experimental study in the STMicroelectronics compiler. Finally, all of this work was carried out with the constraints of modern compilation in mind, in particular JIT (just-in-time) compilation, where the speed and memory consumption of the compiler are key factors. We strive to offer solutions that meet these criteria, or that improve the expected results as long as a given budget has not been exceeded, in particular by exploiting the SSA (static single assignment) form to define tree scan algorithms that generalize the linear scan approaches proposed for JIT compilation.
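The decoupling criterion cited in the abstract is cheap to check: after spilling, it suffices (under SSA) that MaxLive, the number of simultaneously live variables, never exceeds k. A minimal sketch with an invented instruction representation:

```python
from typing import List, Set, Tuple

def max_live(block: List[Tuple[Set[str], Set[str]]],
             live_out: Set[str]) -> int:
    """Register pressure (MaxLive) of a basic block, computed by a
    backward walk. Each instruction is a (defs, uses) pair.

    Under SSA, the decoupled scheme only needs MaxLive <= k after
    spilling for an assignment with k registers to exist.
    """
    live = set(live_out)
    pressure = len(live)
    for defs, uses in reversed(block):
        live = (live - defs) | uses
        pressure = max(pressure, len(live))
    return pressure

# c = a + b; d = c * a  -> at most two values live at any point.
block = [({"c"}, {"a", "b"}), ({"d"}, {"c", "a"})]
print(max_live(block, live_out={"d"}))  # -> 2
```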
4

Extreme hydrodynamics in the coastal environment

Robin, Pauline 18 July 2013
During extreme weather events such as cyclones and storms, exceptional sea states with heavy wave breaking cause a rise of the water level along the coast (storm surge) and a flooding of emerged land that is amplified by the wind. This can cause enormous human and material damage (Xynthia, February 2010). The first aim of this work was to understand the hydrodynamic mechanisms at play during storms. To this end, a measurement campaign was carried out in the large air-water facility of IRPHE at Marseille Luminy. The objective was to quantify the wave runup under an onshore wind (up to 15 m/s) for different wave conditions (regular, irregular, or wind alone), and also to determine the effects of the wind on the wave characteristics and on the mean currents near the shore. In a second step, a Boussinesq-type numerical model in the time domain was developed, taking into account the combined effects of waves, wind, and breaking. The model is based on the approach of Bingham et al. (2009). It incorporates a variable bathymetry to simulate the propagation of waves from offshore to the shoreline. A term for the amplification of waves by the wind was added, following Jeffreys (1925, 1926) and Miles (1957), together with a dissipation term to account for breaking, following Madsen et al. (1997a) and Muscari and Di Mascio (2002). Finally, the simple runup model of Hibberd and Peregrine (1979) and Lynett et al. (2002) was implemented. To validate the model, our results were compared with various experiments from the literature and with experiments previously carried out in the laboratory.
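To illustrate the structure described here (a Boussinesq-type solver augmented with wind-input and breaking-dissipation terms), a generic one-dimensional form with source terms might read as follows; this is a sketch only, not the thesis's actual formulation, which follows Bingham et al. (2009):

```latex
% Generic sketch only: free-surface elevation \eta and depth-averaged
% velocity u over depth h, with dispersive Boussinesq terms gathered
% in \mathcal{D}, wind input S_w and breaking dissipation S_b.
\begin{aligned}
  \partial_t \eta + \partial_x\!\bigl[(h+\eta)\,u\bigr] &= 0,\\
  \partial_t u + u\,\partial_x u + g\,\partial_x \eta
    &= \mathcal{D}(u,\eta) \;+\; S_w \;-\; S_b .
\end{aligned}
```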
5

A study of the spilling and coalescing problems in register allocation as two separate phases

Bouchez, Florent 30 April 2009
The goal of register allocation is to assign the variables of a program to registers, or to "spill" them to memory when no register is available. Memory being much slower, it is preferable to minimize spilling. This problem is difficult and is closely linked to the colorability of the program: Chaitin et al. [1981] modeled register allocation as the coloring of the interference graph, which they proved NP-complete. In this model there is therefore no exact test indicating whether spilling is necessary and, if so, what to spill and where. In the algorithm of Chaitin et al., a spilled variable is removed throughout the whole program, which is inefficient in places where enough registers are still available. To remedy this, many authors observed that live ranges can be split by inserting copy instructions, which creates smaller intervals and allows variables to be spilled over smaller regions. The difficulty is then choosing the right places to split. In practice, better results are obtained when live ranges are split at very many points [Briggs, 1992; Appel and George, 2001]; coalescing is then expected to remove most of these copies, but if it fails, the benefit of the better spill can be cancelled out. It is for this reason that Appel and George [2001] created the "Coalescing Challenge".
Recently (2004), three teams discovered that the interference graphs of programs in Static Single Assignment (SSA) form are chordal. Coloring the graph then becomes easy with a simplicial elimination scheme, and the community wondered whether SSA simplifies register allocation. Our hope was that, as coloring had, spilling and coalescing would become easier to solve now that an exact coloring test is available. Our first goal was therefore to better understand where the complexity of register allocation comes from, and why SSA seems to simplify the problem. We went back to the original proof of Chaitin et al. [1981] and showed that the difficulty comes from the presence of critical edges and from whether color permutations can be performed. We studied the spill problem under SSA and several versions of the coalescing problem: the general cases are NP-complete, but we found a polynomial result for incremental coalescing under SSA. We used it to design new, more efficient heuristics for coalescing, which enables the use of aggressive live-range splitting.
This led us to recommend a better scheme for register allocation. Whereas previous attempts gave mixed results, our improved coalescing makes it possible to cleanly separate register allocation into two independent phases: first, spill to reduce register pressure, potentially splitting many times; second, color the variables and apply coalescing to remove as many copies as possible. This scheme should be very effective in an aggressive compiler; however, the large number of splits and the increased compilation time needed to run the coalescing make it prohibitive in a just-in-time (JIT) setting. We therefore devised a new heuristic called "permutation motion" ("déplacement de permutation"), designed to be used with SSA-based splitting, which can replace our coalescing in that context.
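The chordality result mentioned above is what makes coloring easy under SSA: greedy coloring along the reverse of a perfect elimination order is optimal on chordal graphs. A small self-contained sketch, with an invented graph standing in for an SSA interference graph:

```python
from typing import Dict, List, Set

def greedy_color(order: List[str],
                 adj: Dict[str, Set[str]]) -> Dict[str, int]:
    """Color vertices greedily in the reverse of a perfect elimination
    order. On a chordal graph this uses exactly as many colors as the
    largest clique, which is what makes SSA interference graphs easy
    to color optimally.
    """
    color: Dict[str, int] = {}
    for v in reversed(order):
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 4-cycle with a chord (chordal); a, b, c, d stand in for SSA
# variables. One perfect elimination order is d, a, b, c.
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
       "c": {"a", "b", "d"}, "d": {"b", "c"}}
print(greedy_color(["d", "a", "b", "c"], adj))  # 3 colors suffice
```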
6

Theory and applications of polyphasic dispersed systems to chemostat cellular cultures

Thierie, Jacques GE 05 September 2005
Natural microbiological systems (rivers, seas, ...), semi-natural systems (wastewater treatment plants), and above all industrial or laboratory-scale systems (bioreactors) are commonly represented by mathematical models intended for the study and understanding of phenomena or for the control of processes (production, for example). In almost every case, when the cells (prokaryotic or eukaryotic) involved in these systems are in suspension, the formalism of these unstructured models treats the system as if it were homogeneous. However, in all rigor, this approach is clearly only an approximation: we are dealing with heterogeneous phenomena, formed of several closely mixed phases (solid, liquid, gas). We refer to these systems as "polyphasic dispersed systems" (PDS). They are thermodynamically unstable systems, and are (practically) always open. The approach we undertook consists in examining whether treating apparently "homogeneous" systems as the heterogeneous systems they actually are brings, in spite of some mathematical complications, further significant and relevant information. We proceeded in two steps: a purely theoretical stage, intended to establish in a rigorous and general way the mass balances for each compound in each phase of the system; and an applied stage, aiming to show, through concrete examples, the soundness of the concept and of the method. Concerning the applications, for several reasons, we chose to study a "simple" open bioreactor: the chemostat. The general balances previously derived were applied to this reactor, and a number of examples, mainly taken from the literature, were treated within the PDS framework. The principal results presented in this work concern, on the general level, the importance of partitioning the system into several phases, which brings to light both interphasic exchange flows (which do not appear in so-called monophasic systems) and the possibility of representing the system at several levels of description. Concerning the applications, in addition to some small simple examples, we propose (1) a new mechanism representing cellular energy dissipation (a still very controversial field), using an implicit approach (i.e., without particular assumptions about the form of the intracellular kinetics), and (2) a simple, original and innovative model explaining intercellular chemical signaling, threshold phenomena, and the respiro-fermentative metabolic switch in general. The latter was especially tested on Saccharomyces cerevisiae data to interpret the Crabtree effect in yeast, a mechanism of fundamental and industrial importance (in connection with baker's yeast production and alcoholic fermentations).
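For orientation, the classical homogeneous chemostat balances that the PDS framework generalizes are the standard ones below, for biomass X and substrate S with dilution rate D, specific growth rate μ(S), yield Y, and feed concentration S_in; the exchange term Φ_S is our generic shorthand for what the multiphase balances add, not the thesis's notation:

```latex
% Classical single-phase chemostat balances. The PDS framework writes
% balances of this shape per compound and per phase, and adds
% interphasic exchange terms (sketched here as \Phi_S, our notation).
\begin{aligned}
  \frac{dX}{dt} &= \bigl(\mu(S) - D\bigr)\,X,\\[2pt]
  \frac{dS}{dt} &= D\,\bigl(S_{\mathrm{in}} - S\bigr)
                   - \frac{\mu(S)}{Y}\,X \;+\; \Phi_S .
\end{aligned}
```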
7

Numerical modelling of turbulence and sediment concentrations under breaking waves using OpenFOAM®

Brown, Scott Andrew January 2017
This thesis presents the development of a novel numerical model capable of evaluating suspended sediment dynamics under breaking waves, based on the open-source Computational Fluid Dynamics software OpenFOAM®. The hydrodynamics were determined by solving the incompressible, Reynolds-averaged Navier-Stokes equations for a two-phase fluid using the Finite Volume method, along with a Volume of Fluid scheme that modelled the interface between the air and water phases. A new library of five turbulence models was developed to include weakly compressible effects through the introduction of density variations in the conservation equations. This library was thoroughly evaluated against existing physical data for surf zone dynamics. A skill score based on the mean squared error was applied to rank the models, with the nonlinear k−ε performing best overall and the k−ω predicting turbulent kinetic energy most accurately. Furthermore, the numerical model was shown to predict the near-bed hydrodynamics well, through comparison with new in-house physical data obtained in the COAST laboratory. Suspended sediment concentrations were determined using an advection-diffusion methodology, with near-bed processes modelled using a flux-based approach that balances entrainment and deposition. The model was validated against existing experimental data for steady-state flow conditions, as well as for regular and breaking waves. The agreement was generally good, with the results indicating that the model is capable of capturing complicated processes such as sediment plumes under plunging breakers. The validated model was applied to investigate the properties of the sediment diffusivity, which is a vital parameter in suspended sediment dynamics. In physical experiments, the sediment diffusivity is commonly estimated implicitly, from the vertical concentration profile. In this work, this approach was applied to the numerical concentration predictions and compared with the value directly determined within the model. The estimated value was generally acceptable provided that large horizontal concentration gradients were not present and diffusion dominated over flow advection. However, near the breaking point of waves, large errors were observed at mid-depth of the water column, which strongly correlates with a region of large flow advection relative to diffusion. Therefore, caution is recommended when using this estimation, since it can lead to substantial discrepancies.
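The implicit estimate discussed here is, in its usual form, derived from the steady, horizontally uniform balance between downward settling and upward turbulent diffusion. Using the conventional notation (settling velocity w_s, concentration C, elevation z), which may differ from the thesis's:

```latex
% One-dimensional steady suspension: settling flux balances turbulent
% diffusive flux, yielding the implicit sediment-diffusivity estimate.
w_s\,C + \varepsilon_s\,\frac{dC}{dz} = 0
\quad\Longrightarrow\quad
\varepsilon_s(z) = -\,\frac{w_s\,C(z)}{dC/dz}.
```

This balance assumes negligible horizontal gradients and advection, which is precisely why the estimate degrades near the breaking point, where advection is large relative to diffusion.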
8

Responses of Rumen Microbes to Excess Carbohydrate

Hackmann, Timothy John 05 July 2013
No description available.
9

The Spillable Environment: Expanding a Handheld Device's Screen Real Estate and Interactive Capabilities

Clement, Jeffrey S. 07 August 2007
Handheld devices have a limited amount of screen real estate. If a handheld device could take advantage of larger screens, it would create a more powerful user interface and environment. Moore's law predicts that the computational power of handheld devices will increase dramatically, making interaction with a larger screen increasingly practical. Users can then use their peripheral vision to recognize spatial relationships between objects and solve problems more easily with this integrated system. In the spillable environment, the handheld device uses a DiamondTouch table, a large, touch-sensitive horizontal table, to enhance the viewing environment. When the user moves the handheld device on the DiamondTouch, the orientation of the application changes accordingly. A user can let another person see the application by rotating the handheld device in that person's direction. A user could conveniently use this system in a public area. In a business meeting, a user can easily show documents and presentations to other users around the DiamondTouch table. In an academic setting, a tutor could easily explain a concept to a student. A user could do all of this effortlessly while keeping all of his or her information on the handheld device. A wide range of applications could be used in these types of settings.
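The core interaction (the application follows the handheld's position and heading on the table, or turns to face a collaborator) reduces to mapping the device's pose to a rotation of the rendered content. A toy sketch; the pose fields and function are invented, and a real system would read them from the table's tracking layer:

```python
import math
from typing import Optional, Tuple

def content_rotation(device_x: float, device_y: float, heading_deg: float,
                     face_user_at: Optional[Tuple[float, float]] = None) -> float:
    """Rotation (degrees) to apply to the application so that it stays
    upright relative to the handheld, or faces another user at a point.
    """
    if face_user_at is not None:
        # Rotate the content toward a collaborator seated at a point.
        dx = face_user_at[0] - device_x
        dy = face_user_at[1] - device_y
        return math.degrees(math.atan2(dy, dx)) - 90.0
    return heading_deg  # follow the device's own orientation

print(content_rotation(0.5, 0.5, 30.0))                           # 30.0
print(content_rotation(0.5, 0.5, 30.0, face_user_at=(0.5, 1.0)))  # 0.0
```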
