11 |
Petals of a Rose Close
Keenan, Brendan Owen, 18 September 2014 (has links)
No description available.
|
12 |
The Art of Modeling and Simulation of Multiscale Biochemical Systems
Pu, Yang, 14 May 2015 (has links)
In this thesis we study modeling and simulation approaches for multiscale biochemical systems. The thesis addresses both modeling methods and simulation strategies. In the first part, we propose modeling methods to study the behavior of the insulin secretion pathway. We first expand the single-cell model proposed by Bertram et al. to model multiple cells. Synchronization among multiple cells is observed. Then an unhealthy-cell model is proposed to study the insulin secretion failure caused by weakening of mitochondrial function. By studying the interaction between the healthy and unhealthy cells, we find that insulin secretion can be reinstated by increasing the glucokinase level. This new discovery sheds light on antidiabetic medication. In order to study the stochastic dynamics of the insulin secretion pathway, we first apply the hybrid method to model the discrete events in the pathway. Based on the hybrid model, a probability-based measurement is proposed and applied to test the new antidiabetic remedy. In the second part, we focus on different simulation schemes for multiscale biochemical systems. We first propose a partitioning strategy for the hybrid method that leads to an efficient way of building stochastic cell cycle models. Then different implementation methods for the hybrid method are studied. A root-finding method based on inverse interpolation is introduced to implement the hybrid method with three different ODE solvers, and a detailed discussion of the performance of these three solvers is presented. Last, we propose a new strategy to automatically detect stiffness and identify the species that cause it in the Tau-Leaping method, as well as two stiffness reduction methods. Its efficiency is demonstrated by applying the new strategy to a stiff decaying dimerization model and a heat shock protein regulation model. / Ph. D.
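For readers unfamiliar with the Tau-Leaping method mentioned above, a minimal sketch follows, assuming an illustrative decaying dimerization model; the species, rate constants and step size are invented for illustration and are not taken from the thesis:

    import numpy as np

    rng = np.random.default_rng(0)

    # Decaying dimerization: S1 -> 0, S1 + S1 -> S2, S2 -> S1 + S1.
    # Columns of V are the state-change vectors of the three reactions.
    V = np.array([[-1, -2,  2],
                  [ 0,  1, -1]])
    c = np.array([1.0, 0.002, 0.5])  # illustrative rate constants

    def propensities(x):
        s1, s2 = x
        return np.array([c[0] * s1,
                         c[1] * s1 * (s1 - 1) / 2,
                         c[2] * s2])

    def tau_leap(x, t_end, tau=0.01):
        """Explicit tau-leaping: fire Poisson-distributed reaction counts per step."""
        t = 0.0
        while t < t_end:
            a = propensities(x)
            k = rng.poisson(a * tau)          # number of firings of each reaction
            x = np.maximum(x + V @ k, 0)      # crude guard against negative counts
            t += tau
        return x

    print(tau_leap(np.array([10000, 0]), t_end=1.0))

Stiffness appears when fast reactions force tau to stay far below the slow time scale of interest, which is the regime the detection strategy above targets.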
|
13 |
Stochastic Simulation Methods for Solving Systems with Multi-State Species
Liu, Zhen, 29 May 2009 (has links)
Gillespie's stochastic simulation algorithm (SSA) has been the conventional method for stochastic modeling and simulation of biochemical systems. However, its population-based scheme faces a challenge in the multi-state situations that arise in many biochemical models. To tackle this problem, Morton-Firth and Bray's stochastic simulator (StochSim) was proposed with a particle-based scheme. The thesis first provides a detailed comparison between these two methods, then proposes improvements to StochSim and a hybrid method that combines the advantages of both. Analysis and numerical experiments demonstrate that the hybrid method performs exceptionally well for systems with both the multi-state feature and a high total population.
In order to deal with the combinatorial complexity caused by the multi-state situation, rule-based modeling was proposed by Hlavacek's group, and the particle-based Network-Free Algorithm (NFA) has been used for its simulation. In this thesis, we improve the NFA so that it has both population-based and particle-based features. We also propose a population-based method for simulating rule-based models.
The bacterial chemotaxis model has served as a good biological example involving multi-state species. We implemented different simulation methods for this model, then constructed a graphical interface and compared the behaviors of the bacterium under different mechanisms, including simplified mathematical models and chemically reacting networks simulated stochastically. / Master of Science
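As context for the comparison above, here is a minimal sketch of Gillespie's direct-method SSA for an illustrative reversible binding system. The population-based state vector below is exactly what becomes unwieldy when species carry many internal states, the problem StochSim's particle-based scheme addresses:

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative reversible binding: A + B -> AB (c1), AB -> A + B (c2).
    V = np.array([[-1,  1],
                  [-1,  1],
                  [ 1, -1]])          # state-change vectors for [A, B, AB]
    c = np.array([0.01, 1.0])

    def gillespie(x, t_end):
        t, history = 0.0, [(0.0, x.copy())]
        while t < t_end:
            a = np.array([c[0] * x[0] * x[1], c[1] * x[2]])
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)       # time to next reaction
            j = rng.choice(len(a), p=a / a0)     # which reaction fires
            x = x + V[:, j]
            history.append((t, x.copy()))
        return history

    trace = gillespie(np.array([100, 100, 0]), t_end=5.0)
    print(trace[-1])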
|
14 |
[en] STOCHASTIC SIMULATION MODELS OF INFLOW SCENARIOS WITH INCORPORATION OF CLIMATE VARIABLES / [pt] MODELOS DE SIMULAÇÃO ESTOCÁSTICA DE CENÁRIOS DE VAZÃO COM INCORPORAÇÃO DE VARIÁVEIS CLIMÁTICAS
PAULA MEDINA MACAIRA LOURO, 23 January 2019 (has links)
[en] Despite the exponential growth of wind farms in recent years, the Brazilian energy matrix is mainly composed of hydroelectric plants. One of the main characteristics of hydroelectric generation systems is the strong dependence on hydrological regimes. Currently, the Brazilian electric sector uses the Natural Energy Inflow to generate hydrological scenarios from a PAR model. Such a model is adjusted from parameters estimated on the historical time series, that is, it does not consider any exogenous information that could affect the hydrological regimes and, consequently, the energy production. Recent studies indicate that the use of climatic variables in the modeling of inflow series in the Brazilian basins may serve to reduce uncertainty, owing to the correlation between these variables. Benefits were also identified in decomposing the hydrological series into signal and noise and using only the signal for modeling. In this context, the objects of study of this work are hybrid models that combine decomposition techniques for hydrological series with time series models with exogenous variables, as well as models that associate such variables in a non-linear and periodic way. These new approaches contemplate the use of the SSA and MSSA decomposition techniques in combination with PAR, the application of the PARX model, and the development of the PGAM model. In conclusion, the applied models proved efficient for the proposed objectives and, in some cases, outperformed models already published in the literature.
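A minimal sketch of the PARX idea (a periodic autoregression with an exogenous climate covariate) follows; the order-1 structure, monthly periodicity and synthetic data are illustrative assumptions, not the exact specification used in the thesis:

    import numpy as np

    def fit_parx1(z, x, periods=12):
        """Fit z_t = phi_m * z_{t-1} + beta_m * x_t + e_t, one (phi, beta) per period m.

        z: inflow series, x: exogenous climate series, both 1-D and aligned monthly.
        """
        coef = {}
        for m in range(periods):
            t = np.arange(1, len(z))
            idx = t[t % periods == m]                  # time points falling in period m
            A = np.column_stack([z[idx - 1], x[idx]])  # lagged inflow and covariate
            coef[m], *_ = np.linalg.lstsq(A, z[idx], rcond=None)
        return coef

    # Illustrative synthetic data: seasonal inflow driven by a climate index.
    rng = np.random.default_rng(2)
    n = 600
    x = np.sin(2 * np.pi * np.arange(n) / 12) + 0.1 * rng.standard_normal(n)
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = 0.6 * z[t - 1] + 0.8 * x[t] + 0.2 * rng.standard_normal()

    print(fit_parx1(z, x)[0])  # estimated [phi, beta] for period 0, near [0.6, 0.8]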
|
15 |
Space Situational Awareness with the Swedish Allsky Meteor Network
Alinder, Simon, January 2019 (has links)
This thesis investigates the use of the Swedish Allsky Meteor Network (SAMN) for observing, identifying, and determining the orbits of satellites. The overall goal of this project is to determine the feasibility of using such a network for Space Situational Awareness (SSA) purposes, which requires identification and monitoring of objects in orbit. The thesis is a collaboration with the Swedish Defense Research Agency (FOI) to support their efforts in SSA. Within the frame of this project, the author developed software that can take observations of an object collected by the all-sky cameras of SAMN and perform an Initial Orbit Determination (IOD) of the object. An algorithm that improves the results of the IOD was developed and integrated into the software. The software can also identify the object if it is in a database that the program has access to or, if it could not be identified, make an approximate prediction of when and where the object will next be visible. A program that analyses the stability of the IOD results was also developed: it measures the spread in IOD solutions when a small amount of artificial noise is added to one or more of the observed coordinates in the sky. It was found that using multiple cameras at different locations greatly improves the stability of the solutions. Gauss' method was used for the IODs. The advantages and disadvantages of this method are discussed, and ultimately other methods, such as the Gooding method or Double-R iteration, are recommended for future work. This is mostly because Gauss' method has a singularity when all three lines of sight from observer to object lie in the same plane, which makes the results unreliable. The software was tested on a number of observations, both synthetic and real, and the results were compared against known data from public databases. It was found that these techniques can, with some changes, be used for IOD and satellite identification, but that the very accurate position determination required for full orbit determination is not feasible.
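The singularity of Gauss' method mentioned above can be checked cheaply before trusting an IOD solution: when the three line-of-sight unit vectors are nearly coplanar, their scalar triple product approaches zero. A minimal sketch, with illustrative vectors:

    import numpy as np

    def coplanarity(los1, los2, los3):
        """Scalar triple product of the three unit line-of-sight vectors.

        Values near zero signal the singular geometry in which Gauss' method
        becomes unreliable (all lines of sight in one plane).
        """
        u = [v / np.linalg.norm(v) for v in (los1, los2, los3)]
        return float(np.dot(u[0], np.cross(u[1], u[2])))

    # Illustrative: three nearly coplanar observations of a satellite pass.
    print(coplanarity(np.array([1.0, 0.1, 0.02]),
                      np.array([0.8, 0.6, 0.03]),
                      np.array([0.2, 1.0, 0.04])))  # near zero -> ill-conditioned

One plausible reading of the multi-camera result above is that geographically separated observers break this near-coplanar geometry, which is why they stabilize the solutions.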
|
16 |
Utilização de análise de componentes principais em séries temporais / Use of principal component analysis in time series
Teixeira, Sérgio Coichev, 12 April 2013 (has links)
The main objective of principal component analysis (PCA) is to reduce a set of observed variables to a smaller set of uncorrelated variables, called principal components, that help the researcher understand the variability and correlation structure of the original data. The technique is simple and widely used in studies across many areas. For its construction, the linear relationship between the observed variables is measured through the covariance or correlation matrix. However, these matrices may fail to capture important information for data that are sequentially correlated in time (autocorrelated), wasting an important part of the data when interpreting the components. In this work, we study principal component analysis techniques that make it possible to interpret or analyze the autocorrelation structure of the observed data. To this end, we explore principal component analysis in the frequency domain, which provides more specific and detailed results for autocorrelated data than the classical technique. In the SSA (Singular Spectrum Analysis) and MSSA (Multichannel Singular Spectrum Analysis) methods, principal component analysis is based on the correlation in time and between the different observed variables. These techniques are widely applied to atmospheric data to identify patterns such as trend and periodicity.
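A minimal sketch of the basic SSA steps referred to above (embedding into a trajectory matrix, SVD, grouping of leading components, diagonal averaging), run on illustrative synthetic data; the window length and number of components are arbitrary choices:

    import numpy as np

    def ssa(series, window, n_components):
        """Basic Singular Spectrum Analysis reconstruction of a 1-D series."""
        n = len(series)
        k = n - window + 1
        # Embedding: trajectory (Hankel) matrix whose columns are lagged windows.
        X = np.column_stack([series[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Grouping: keep the leading components as the "signal".
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        # Diagonal averaging (Hankelization) back to a series.
        rec = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                rec[i + j] += Xr[i, j]
                counts[i + j] += 1
        return rec / counts

    t = np.linspace(0, 12 * np.pi, 400)
    noisy = np.sin(t) + 0.05 * t + 0.3 * np.random.default_rng(3).standard_normal(400)
    trend_plus_cycle = ssa(noisy, window=60, n_components=3)
    print(trend_plus_cycle[:5])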
|
18 |
Modélisation multi-échelle et hybride des maladies contagieuses : vers le développement de nouveaux outils de simulation pour contrôler les épidémies / Multi-scale socio-environmental modeling of epidemiological processes: a way of organizing human environments and rhythms to control and prevent the spread of contagious diseases
Hessami, Mohammad Hessam, 23 June 2016 (has links)
Theoretical studies in epidemiology mainly use differential equations, often under unrealistic assumptions (e.g. spatially homogeneous populations), to study the development and spreading of contagious diseases. Such models are not, however, well adapted to understanding epidemiological processes at different scales, nor are they efficient for correctly predicting epidemics. Yet such models should be closely related to the social and spatial structure of populations. In the present thesis, we propose a series of new models in which different levels of spatiality (e.g. local structure of the population, in particular group dynamics, spatial distribution of individuals in the environment, role of resistant people, etc.) are taken into account to explain and predict how communicable diseases develop and spread at different scales, even at the scale of large populations. Furthermore, the manner in which our models are parametrised allows them to be connected together so as to describe the epidemiological process at a large scale (population of a big town, country) and, at the same time, with accuracy in limited areas (office buildings, schools). We first succeed in including the notion of groups in SIR (Susceptible, Infected, Recovered) differential equation systems by rewriting the SIR dynamics in the form of an enzymatic reaction with non-competitive inhibition: group-complexes of different composition in S, I and R individuals form, and R individuals behave as non-competitive inhibitors. Then, global group dynamics simulated by stochastic algorithms in a homogeneous space, as well as emergent group dynamics obtained in multi-agent systems, are coupled to such SIR epidemic models. As our group-based models provide fine-grained information (i.e. microscopic resolution of time, space and population), we propose an analysis of the criticality of epidemiological processes. We think that diseases in a given social and spatial environment present characteristic signatures, and that such measurements could allow the identification of the factors that modify their dynamics. We aim here to extract the essence of real epidemiological systems by using various methods based on different computer-oriented approaches. As our models can take into account individual behaviours and group dynamics, they are able to use big-data information yielded by smart-phone technologies and social networks. As a long-term objective derived from the present work, one can expect good predictions of the development of epidemics, but also a tool to reduce epidemics by guiding new environmental architectures and by changing human health-related behaviours.
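As a toy illustration of the enzymatic analogy described above, the sketch below damps the SIR transmission term by a non-competitive-inhibition factor 1/(1 + R/K); this particular functional form and all constants are assumptions made for illustration, not the thesis' actual group-complex equations:

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir_inhibited(t, y, beta=0.4, gamma=0.1, K=200.0):
        """SIR dynamics where recovered individuals damp transmission,
        by analogy with non-competitive enzyme inhibition."""
        S, I, R = y
        N = S + I + R
        infection = beta * S * I / N / (1.0 + R / K)  # R acts as the inhibitor
        return [-infection, infection - gamma * I, gamma * I]

    sol = solve_ivp(sir_inhibited, (0, 160), [990.0, 10.0, 0.0])
    S, I, R = sol.y[:, -1]
    print(f"final sizes: S={S:.0f}, I={I:.0f}, R={R:.0f}")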
|
19 |
TIREX : une représentation textuelle intermédiaire pour un environnement d'exécution virtuel, échanger des informations du compilateur et d'analyse du programme / TIREX: A textual target-level intermediate representation for virtual execution environment, compiler information exchange and program analysis
Pietrek, Artur, 02 October 2012 (has links)
Some environments require several compilers, for instance one for the operating system, supporting the full C/C++ norm, and one for the applications, potentially supporting less but able to deliver more performance. Maintaining different compilers for a target requires considerable effort, thus it is easier to implement and maintain target-dependent optimizations in a single, external tool. This requires a way of connecting these compilers with the target-dependent optimizer, preferably passing along some internal compiler data structures that would be time-consuming, difficult or even impossible to reconstruct from assembly language, for instance. In this thesis we introduce Tirex, a Textual Intermediate Representation for EXchanging target-level information between compilers, optimizers and other tools in the compilation toolchain. Our intermediate representation contains an instruction stream of the target processor, but still keeps the explicit program structure and supports the SSA (Static Single Assignment) form. It is easily extensible and highly flexible, which allows any data deemed important to be passed on to the optimizer. We build Tirex by extending the existing Minimalist Intermediate Representation (MinIR), itself expressed as a YAML textual encoding of compiler structures. Our extensions in Tirex include: lowering the representation to the target level, conserving the program data flow, and adding loop-scoped information and data dependencies. Tirex is currently produced by the Open64 and LLVM compilers, with a GCC producer under work. It is consumed by the Linear Assembly Optimizer (LAO), a specialized, target-specific code optimizer. We show that Tirex is versatile and can be used in a variety of different applications, such as a virtual execution environment (VEE), and that it provides a strong basis for a program analysis framework. As part of the VEE, we present an interpreter for the Static Single Assignment (SSA) form and a just-in-time (JIT) compiler. We show how interpreting a target-level representation eliminates most of the complexities of mixed-mode execution. We also explore the issues related to efficiently interpreting a program representation in SSA form.
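To make the SSA-interpretation issue concrete, here is a toy interpreter for an SSA-form program with phi nodes; the instruction encoding is invented for illustration and is unrelated to the actual Tirex syntax:

    # Toy SSA interpreter: each block is a list of (dest, op, args) tuples.
    # 'phi' args map predecessor-block names to SSA values; 'br' picks a successor.
    def run(blocks, entry):
        env, block, prev = {}, entry, None
        while True:
            for dest, op, args in blocks[block]:
                if op == "phi":
                    # real phi nodes execute as a parallel copy; sequential reads
                    # are safe here because these phis are independent
                    env[dest] = env[args[prev]]
                elif op == "const":
                    env[dest] = args
                elif op == "add":
                    env[dest] = env[args[0]] + env[args[1]]
                elif op == "lt":
                    env[dest] = env[args[0]] < env[args[1]]
                elif op == "jmp":
                    prev, block = block, args
                    break
                elif op == "br":
                    cond, then_b, else_b = args
                    prev, block = block, (then_b if env[cond] else else_b)
                    break
                elif op == "ret":
                    return env[args]
            else:
                raise RuntimeError(f"block {block!r} lacks a terminator")

    # sum = 0; for i in 0..4: sum += i  -- written in SSA form with phi nodes.
    blocks = {
        "entry": [("i0", "const", 0), ("s0", "const", 0), ("_", "jmp", "loop")],
        "loop": [("i", "phi", {"entry": "i0", "loop": "i2"}),
                 ("s", "phi", {"entry": "s0", "loop": "s2"}),
                 ("one", "const", 1),
                 ("s2", "add", ("s", "i")),
                 ("i2", "add", ("i", "one")),
                 ("lim", "const", 5),
                 ("t", "lt", ("i2", "lim")),
                 ("_", "br", ("t", "loop", "exit"))],
        "exit": [("r", "ret", "s2")],
    }
    print(run(blocks, "entry"))  # sum of 0..4 = 10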
|
20 |
Decoupled (SSA-based) register allocators : from theory to practice, coping with just-in-time compilation and embedded processors constraints / Allocation de registres découplée (basée sur la formulation SSA) : de la théorie à la pratique, faire face aux contraintes liées à la compilation juste à temps et aux processeurs embarqués
Colombet, Quentin, 07 December 2012 (has links)
My thesis deals with register allocation. During this phase, the compiler has to assign the variables of the source program, arbitrarily many, to the actual registers of the processor, limited in number to k. Recent works, for instance the theses of F. Bouchez and S. Hack, have shown that it is possible to split this phase into two fully decoupled steps: the spill (storing variables into memory to release registers) followed by the register assignment itself. These works demonstrate the feasibility of this decoupling, relying on a theoretical framework and some simplifying assumptions. In particular, it is sufficient to ensure, after the spill step, that the number of simultaneously live variables is below k. My thesis follows these works by showing how to apply this kind of approach when real-world constraints come into play: instruction encoding, ABI (application binary interface), register aliasing. Different approaches are proposed; they either sidestep these problems or tackle them directly in the theoretical framework. The hypotheses of the models and the proposed solutions are evaluated and validated through a thorough experimental study with the compiler of STMicroelectronics. Finally, all these works have been done with the constraints of modern compilation in mind, namely JIT (just-in-time) compilation, where the compilation time and the memory footprint of the compiler are key factors. We strive to offer solutions that satisfy these criteria or improve the result until a given budget is reached. In particular, we exploit the SSA (static single assignment) form to define tree scan algorithms that generalize the linear scan approaches proposed for JIT compilation.
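For context, here is a minimal sketch of the classic linear scan allocation that tree scan generalizes, assuming live intervals have already been computed; the interval data and eviction policy are simplified for illustration:

    def linear_scan(intervals, k):
        """Poletto & Sarkar-style linear scan over (name, start, end) live intervals.

        Assigns one of k registers per interval, or spills the interval that
        ends last when no register is free.
        """
        intervals = sorted(intervals, key=lambda iv: iv[1])   # by start point
        active, free = [], list(range(k))                     # active: (end, name, reg)
        assignment = {}
        for name, start, end in intervals:
            # Expire intervals that ended before this one starts.
            for iv in [iv for iv in active if iv[0] <= start]:
                active.remove(iv)
                free.append(iv[2])
            if free:
                reg = free.pop()
                assignment[name] = reg
                active.append((end, name, reg))
            else:
                spill = max(active)               # active interval ending last
                if spill[0] > end:                # steal its register
                    active.remove(spill)
                    assignment[spill[1]] = "spill"
                    assignment[name] = spill[2]
                    active.append((end, name, spill[2]))
                else:
                    assignment[name] = "spill"
        return assignment

    ivs = [("a", 0, 8), ("b", 1, 3), ("c", 2, 9), ("d", 4, 6)]
    print(linear_scan(ivs, k=2))  # "c" has the furthest end and gets spilled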
|