1

A new relative permeability model for compositional simulation of two and three phase flow

Yuan, Chengwu 10 February 2011 (has links)
Chemical treatments using solvents and surfactants can be used to increase the productivity of gas-condensate wells with condensate banks. CMG's compositional simulator GEM was used to simulate such treatments to gain a better understanding of design questions, such as how much treatment solution to inject, and to predict the benefits of such treatments. GEM was used to simulate treatments in vertical wells with and without hydraulic fractures, and also in horizontal wells. However, as in other commercial compositional simulators, the flash calculations used to predict the phase behavior are limited to two phases, whereas a three-phase flash is needed to accurately model the complex phase behavior that occurs during and after the injection of treatment solutions. UTCOMP is a compositional simulator with a three-phase flash routine, and attempts were made to use it to simulate such well treatments. However, this is a very difficult problem to simulate, and all previous attempts failed because of numerical problems caused by inconsistent phase labeling (so-called phase flipping) and the discontinuities this causes in the relative permeability values. In this research, a new relative permeability model based on molar Gibbs free energy was developed, implemented in a compositional simulator, and applied to several difficult three-phase flash problems. A new way of modeling the residual saturations was needed to ensure a continuous variation of the residual saturations between the three-phase region and the two-phase region, and was included in the new model. The new relative permeability model was implemented in the compositional reservoir simulator UTCOMP. This new model makes it unnecessary to identify and track the phases. It automatically avoids the previous phase flipping problems and is thus physically accurate as well as computationally faster due to the improved numerical performance.
The new code was tested by running several difficult simulation problems, including a CO2 flood with three hydrocarbon phases and a water phase. A new framework for flash calculations was also developed and implemented in UTCOMP to account for the multiple roots of the cubic equation of state and to ensure a global minimum in the Gibbs free energy, by performing an exhaustive search for the minimum value over one, two and three phases. The purpose was to determine whether the standard method, a Gibbs stability test followed by a flash calculation, was in fact finding the true minimum of the Gibbs free energy. Test problems were run and the results of the standard algorithm and the exhaustive search algorithm were compared. The updated UTCOMP simulator was used to understand the flowback of solvents injected in gas-condensate wells as part of chemical treatments. The flowback of the solvents, a short-term process, affects how well the treatment works and has been an important design and performance question for years that could not be simulated correctly until now, due to the limitations of both commercial simulators and UTCOMP. Different solvents and chase gases were simulated to gain insight into how to improve the design of the chemical treatments under different conditions.
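The exhaustive-search flash framework described above reduces, at its core, to comparing the total Gibbs free energy of every candidate phase configuration and keeping the lowest. A minimal illustrative sketch of that selection step (the candidate energies below are placeholders rather than equation-of-state results, and the function name is hypothetical, not from UTCOMP):

```python
# Illustrative sketch: an exhaustive search assigns each candidate phase
# split a total molar Gibbs free energy and keeps the minimum, instead of
# trusting a stability test followed by a single flash calculation.
# The energies here are placeholders, not EOS results.

def select_phase_split(candidates):
    """Return the (label, energy) pair with the minimum total Gibbs energy."""
    return min(candidates, key=lambda c: c[1])

candidates = [
    ("single-phase", -1.20),   # Gibbs energy of the unsplit mixture
    ("two-phase",    -1.35),   # best two-phase split found
    ("three-phase",  -1.31),   # best three-phase split found
]
best = select_phase_split(candidates)
print(best[0])  # two-phase
```

In a real simulator each candidate energy would come from flashing the mixture with a cubic equation of state; the point of the exhaustive comparison is to catch cases where the stability-test route stops at a local rather than global minimum.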
2

Characterization of crystalline materials by rotation electron diffraction : Phase identification and structure determination

Yun, Yifeng January 2014 (has links)
Electron crystallography is powerful for the determination of complex structures. The newly developed 3D electron diffraction (ED) methods make structure determination from nano- and micron-sized crystals much easier than other methods, for example X-ray diffraction. Almost complete 3D ED data can be collected easily and quickly from crystals at arbitrary orientations, and dynamical effects are largely reduced compared to zonal ED patterns. 3D ED is powerful for phase identification and structure solution from individual nano- and micron-sized crystals, while powder X-ray diffraction (PXRD) provides information from all phases present in a sample. 3D ED methods and PXRD are complementary, and their combination is promising for studying multiphasic samples and complicated crystal structures. In this thesis, the feasibility and capability of 3D ED methods, specifically rotation electron diffraction (RED), for phase identification and structure determination of different kinds of crystalline materials with nano- or submicrometer-sized crystals are investigated. Experimental conditions for RED data collection and data processing in relation to data quality, as well as the challenges in the application of RED, are discussed. RED was combined with PXRD to identify phases in as-synthesized samples and to characterize the atomic structures of eleven crystalline compounds. It was shown to be possible to identify as many as four distinct compounds within one sample containing submicron-sized crystals in a Ni-Se-O-Cl system. RED was also used to determine the unit cell and symmetry of isoreticular metal-organic frameworks (SUMOF-7) and to solve five zeolite structures with new frameworks, ITQ-51, ITQ-53, ITQ-54, EMM-23 and EMM-25, as well as that of a metal-organic framework (MOF), SUMOF-7I. The structure of the open-framework germanate SU-77 was solved by combining RED with PXRD. The structures of the zeolites and SU-77 were confirmed by Rietveld refinement against PXRD.
High-resolution transmission electron microscopy was used to confirm the structure models of ITQ-51, EMM-25 and SUMOF-7I. At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2: submitted; Paper 4: accepted; Paper 6: manuscript; Paper 7: epub ahead of print; Paper 9: manuscript; Paper 11: manuscript.
3

Design of Distribution Transformer Management System to Support Demand Response for Smart Grids

Ku, Te-Tien 03 September 2012 (has links)
In this dissertation, a transformer management system has been developed to monitor transformer overloading and generate warning messages in the conduit mapping management system (CMMS) of the Taipower company. Transformer overload prediction is performed in both offline and online modes. Transformer loading is estimated using the customer monthly energy consumption in the customer information system (CIS) and the connectivity attributes, in the CMMS, of each transformer and the customers it serves. The daily load curve of each distribution transformer is derived from the typical daily load patterns developed in a load survey study. A warning message is generated when the estimated peak loading is larger than the transformer's rated capacity. To enhance the accuracy of transformer attributes in the CMMS, a transformer phasing measurement system (TPMS) and a connectivity identification system, which identifies all of the customers served by each transformer, are developed. Because it is difficult to receive the 1 pulse per second signal from the global positioning system for timing synchronization of TPMS measuring units when measuring the phasing of transformers located in basements, a temperature-compensated crystal oscillator with a fuzzy calibration algorithm is used to keep the phase deviation within 10° over a measurement period of 2 hours. To solve the problem of incorrect transformer-customer connectivity data in the CMMS, power line carrier technology is applied in the design of a connectivity measurement system for identifying the customers served by each transformer. The peak loading of each transformer is estimated including the temperature effect, and the overloading flag of the transformer is displayed on the CMMS automatic mapping system. For the online TLM system, an embedded transformer terminal unit is developed for real-time measurement of transformer loading and insulation oil temperature.
For transformers in abnormal operating conditions, alarm signals are generated and transmitted to the TLM master station via a hybrid communication system to activate the demand response function and execute load shedding control of customer loads.
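The offline warning logic described above amounts to estimating each transformer's daily peak from its customers' monthly energy and a typical daily load pattern, then raising a flag when the estimate exceeds the rated capacity. A hypothetical sketch of that calculation (all function names, pattern values and numbers are illustrative assumptions, not Taipower's actual model):

```python
def estimated_peak_kva(monthly_kwh, daily_pattern, power_factor=0.9):
    """Estimate a transformer's daily peak load (kVA) from the monthly energy
    of the customers it serves (kWh each) and a normalized typical daily
    load pattern.  Illustrative only."""
    avg_kw = sum(monthly_kwh) / 30.0 / 24.0            # average demand, kW
    # peak-to-average shape factor of the typical daily pattern
    shape = max(daily_pattern) / (sum(daily_pattern) / len(daily_pattern))
    return avg_kw * shape / power_factor

def overload_flag(peak_kva, rated_kva):
    """Raise the CMMS warning flag when the estimated peak exceeds the rating."""
    return peak_kva > rated_kva

# Three customers and a 24-point residential pattern peaking in the evening
pattern = [0.5] * 18 + [1.5] * 6
peak = estimated_peak_kva([900.0, 1200.0, 600.0], pattern)
print(overload_flag(peak, rated_kva=5.0))   # True: a 5 kVA unit would overload
```

The real system would additionally apply the temperature effect mentioned in the abstract, e.g. as a derating factor on `rated_kva`.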
4

System Profiling and Green Capabilities for Large Scale and Distributed Infrastructures

Tsafack Chetsa, Ghislain Landry 03 December 2013 (has links) (PDF)
Nowadays, reducing the energy consumption of large-scale and distributed infrastructures has become a real challenge for both industry and academia, as evidenced by the many efforts aiming to reduce the energy consumption of these systems. Initiatives for reducing the energy consumption of large-scale and distributed infrastructures can, without loss of generality, be divided into hardware and software initiatives. Unlike their hardware counterparts, software solutions to the energy reduction problem in large-scale and distributed infrastructures rarely result in real deployments. On the one hand, this can be explained by the fact that they are application-oriented. On the other hand, their failure can be attributed to their complexity, which often requires vast technical knowledge of the proposed solutions and/or a thorough understanding of the applications at hand. This restricts their use to a limited number of experts, because users usually lack the necessary skills. In addition, although subsystems such as the memory are becoming more and more power hungry, current software energy reduction techniques fail to take them into account. This thesis proposes a methodology for reducing the energy consumption of large-scale and distributed infrastructures. Organized in three steps, (i) phase detection, (ii) phase characterization, and (iii) phase identification and system reconfiguration, our methodology abstracts away from any individual application: it focuses on the infrastructure, analyses its runtime behaviour, and takes reconfiguration decisions accordingly. The proposed methodology is implemented and evaluated on high performance computing (HPC) clusters of varied sizes through the Multi-Resource Energy Efficient Framework (MREEF). MREEF implements the proposed energy reduction methodology so as to leave users the choice of implementing their own system reconfiguration decisions depending on their needs.
Experimental results show that our methodology reduces the energy consumption of the overall infrastructure by up to 24% with less than 7% performance degradation. By taking all subsystems into account, our experiments demonstrate that the energy reduction problem in large-scale and distributed infrastructures can benefit from more than "the traditional" processor frequency scaling. Experiments on clusters of varied sizes demonstrate that MREEF, and therefore our methodology, can easily be extended to a large number of energy-aware clusters. The extension of MREEF to virtualized environments such as the cloud shows that the proposed methodology goes beyond HPC systems and can be used in many other computing environments.
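The first step of the three-step methodology, phase detection, can be pictured as grouping consecutive system-metric samples into phases, with a new phase starting whenever the runtime behaviour drifts past a similarity threshold. A toy sketch under that assumption (this is not MREEF's actual algorithm; the metric vectors and threshold are invented):

```python
import math

def detect_phases(samples, threshold):
    """Group consecutive metric vectors into phases: a new phase starts when
    the Euclidean distance to the current phase's first sample exceeds the
    threshold.  Returns (start_index, end_index) pairs."""
    phases, start = [], 0
    for i in range(1, len(samples)):
        if math.dist(samples[i], samples[start]) > threshold:
            phases.append((start, i - 1))
            start = i
    phases.append((start, len(samples) - 1))
    return phases

# Two normalized counters (e.g. CPU load, memory traffic) sampled over time
metrics = [(0.1, 0.2), (0.12, 0.19), (0.9, 0.8), (0.88, 0.82), (0.1, 0.2)]
print(detect_phases(metrics, threshold=0.5))  # [(0, 1), (2, 3), (4, 4)]
```

Each detected phase would then be characterized (step ii) and matched against known phases to pick a reconfiguration (step iii), e.g. a frequency or storage-subsystem setting.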
5

Técnicas computacionais inteligentes para a inferência de estado e a otimização de cultivos de Streptococcus pneumoniae

Horta, Antonio Carlos Luperni 27 March 2008 (has links)
Streptococcus pneumoniae (pneumococcus) is a pathogenic bacterium that causes several infections, which are aggravated by the increase of serotypes with antibiotic resistance. The development of an effective vaccine against this pathogen is crucial for the prevention of pneumococcal illnesses. Conjugate vaccines, consisting of the capsular polysaccharide joined to a carrier protein, are more efficient in stimulating immunologic memory. The capsular polysaccharide (PS) is present in the capsule that surrounds the cell; thus, conjugate vaccine production involves cultivating the bacterial cells to produce it. As the organism is cultivated in the absence of oxygen, lactate production is inevitably high, leading to growth inhibition due to lactate accumulation in the medium. To minimize the inhibitory effects of lactate accumulation and to increase PS production, it is necessary to monitor the process and adequately control the addition of supplementary medium along with the withdrawal of saturated medium. This kind of operation can be performed by carrying out a fed-batch cultivation in a bioreactor connected to a perfusion system. Success in the monitoring, control and optimization of this bioprocess depends on the efficiency of the modeling and simulation resources employed. This research work proposes the use of intelligent computational techniques, specifically the heuristic search technique called simulated annealing (SA) combined with neural networks, for the state inference and optimization of S. pneumoniae cultivations. The proposal was implemented as a computational system that: a) uses SA to identify the values of a set of parameters associated with unstructured models; and b) uses neural networks (individually and grouped as a committee) for the state inference of a culture. The work presents and discusses the results of the system on experimentally obtained data sets and highlights the importance of the proposal for achieving higher efficiency in culture control processes.
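The simulated annealing used for parameter identification can be sketched in a few lines: perturb the current parameter vector, always accept improvements, and accept worse moves with a probability that shrinks as a temperature is cooled. This is a generic SA sketch, not the thesis's system; the quadratic objective stands in for the model-versus-data error of an unstructured growth model, and all tuning values are illustrative:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95, iters=2000, seed=1):
    """Minimize cost(x) by random perturbation, accepting uphill moves with
    probability exp(-delta/T) while the temperature T cools geometrically."""
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        # Always accept improvements; sometimes accept worse candidates
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy objective: recover two "kinetic parameters" with optimum (0.4, 2.0)
cost = lambda p: (p[0] - 0.4) ** 2 + (p[1] - 2.0) ** 2
params, err = simulated_annealing(cost, x0=[0.0, 0.0], step=0.3)
```

In the thesis's setting, `cost` would compare the unstructured model's simulated trajectory against experimental cultivation data rather than a fixed quadratic.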
6

Sistema automático de supervisão e controle de cultivos de alta densidade celular de E. coli recombinante

Horta, Antonio Carlos Luperni 22 December 2011 (has links)
High cell density cultivations of recombinant E. coli are a fast and economical way to produce recombinant proteins. Through this bioprocess, products with high added value and pharmaceuticals of great importance, such as insulin, human and bovine growth hormone, protein antigens for vaccine formulation, and enzymes, among others, are obtained. However, keeping these cultivations within the desired conditions is a major challenge, since variables such as the dissolved oxygen concentration (DOC) and the substrate concentration are difficult to control. Therefore, the development and implementation of an automatic monitoring and control tool are key requirements for the performance of high-density cultivations. The present work has as its main objectives to study feeding strategies for high cell density cultivation of recombinant Escherichia coli and to develop a computational tool capable of executing the chosen strategies, performing the monitoring, control and supervision of the cultivations. Fed-batch cultivations were carried out under the supervision of the tool in a 5 L in-house bioreactor, equipped with sensors for temperature, dissolved oxygen, pH, pressure and biomass (a sensor that measures the concentration of viable cells based on permittivity measurements) and with peristaltic pumps, and connected to a gas analyzer. The tool was developed with LabView 8.0 and MatLab 6.5, with data acquisition and communication with the different bioreactor accessories via Compact FieldPoint. Twenty-two fed-batch cultivations with 5 different clones of E. coli BL21(D3), expressing the enzyme penicillin G acylase (PGA) as well as antigenic proteins of S. pneumoniae (PspA3, PspA245 and PspA4Pro) and E. rhusiopathiae (SpaA), were performed during the development of the tool and the feeding strategy studies. Both a defined medium (modified HDF) and a complex medium (modified ZYM-5052) were used, usually with glycerol as the main carbon source and IPTG or lactose as inducers. In all cultivations, samples were collected to quantify the concentration of cells (dry weight on a 0.22 μm filter and optical density at 600 nm), organic acids, glucose, glycerol and lactose (HPLC), as well as protein expression (densitometry, and the NIPAB method for PGA) and plasmid stability (plating). The tool SUPERSYS_HCDCR (registered as free software), developed, implemented and validated in the performed cultivations, carries out the basic functions of bioreactor supervision software, such as monitoring and data acquisition of pressure, temperature, pH, DOC, and the fractions of CO2 and O2 in the outlet gas, as well as real-time estimation of the respiratory quotient, the rate of oxygen consumption and the rate of CO2 production. In addition, it has the following special features: i) automatic control of air and oxygen flow according to cellular demand; ii) automatic activation of the feed pump at the end of the batch; iii) automatic control of the feeding flow rate as a function of the specific growth rate inferred in real time; iv) automatic control of the feeding flow rate constrained by the dissolved oxygen concentration; v) audible alarms indicating failures in the process; vi) failure messages sent via email; vii) automatic control of the dissolved oxygen concentration; viii) control of the bioreactor pressure; and ix) control of the bath temperature. Regarding the studies of feeding strategies aimed at increasing biomass productivity in high cell density cultivations of recombinant E. coli, using the supervision tool developed together with changes in the composition of a synthetic culture medium available in the literature, cell concentrations greater than 150 g/L were achieved in less than 24 hours of cultivation, corresponding to a productivity of 9.2 g/L·h. This value, which is higher than those reported in the literature, was obtained without acetate accumulation and with high production of the recombinant protein.
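Feature iii above, feeding controlled as a function of the specific growth rate, is commonly based on the standard fed-batch substrate mass balance F = μ·X·V / (Y_X/S · S_f). A sketch of that relation with illustrative numbers (the parameter values are assumptions for demonstration, not values from the thesis):

```python
def feed_rate(mu, x_gl, volume_l, yield_xs, s_feed_gl):
    """Feed flow (L/h) that sustains a specific growth rate mu (1/h) for a
    biomass concentration x_gl (g/L) in a broth volume volume_l (L), given a
    biomass yield yield_xs (g/g) and feed substrate s_feed_gl (g/L):
    F = mu * X * V / (Y_xs * S_f)."""
    return mu * x_gl * volume_l / (yield_xs * s_feed_gl)

# mu = 0.3 1/h, X = 40 g/L, V = 4 L, Y_xs = 0.5 g/g, glycerol feed at 500 g/L
print(feed_rate(0.3, 40.0, 4.0, 0.5, 500.0))  # 0.192  (L/h)
```

In the tool described above, `mu` would be the specific growth rate inferred in real time (e.g. from permittivity-based biomass readings), with the computed flow further constrained by the dissolved oxygen level (feature iv).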
7

Studies In Automatic Management Of Storage Systems

Pipada, Pankaj 06 1900 (has links) (PDF)
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending on the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement, and then we use learning along with a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. To this end, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, to a new XOR code with better fault tolerance. As errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors based on access intensity (i.e., the bit error rate, BER). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need for automating various aspects of data management in virtualized data centers, we study the specific problem of automatically implementing virtual machine (VM) migration in a dynamic environment according to pre-set policies. This problem requires automated identification of the various workloads and their execution environments running inside virtual machines in a non-intrusive manner. To this end we propose AuM (for Autonomous Manager), which has the capability to learn workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely multiple kernel learning, to aggregate the information, and show that AuM is very accurate in identifying workloads and their execution environments, and also follows user-set policies very closely in the VM migration tasks.
Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing quality of service. To meet service level requirements across such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., it must identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels can be used to generate alerts or to trigger phase-specific system tuning.
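The XOR-coded reorganization idea rests on the basic parity property that any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity. A minimal sketch of that property (this illustrates plain XOR parity only, not the thesis's reorganization framework or its BER model):

```python
def xor_parity(blocks):
    """XOR equal-length data blocks together, yielding the code's parity block."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(survivors, parity):
    """Rebuild the single missing block from the survivors and the parity:
    XOR-ing everything that remains cancels out all but the lost block."""
    return xor_parity(survivors + [parity])

data = [b"abcd", b"efgh", b"ijkl"]
p = xor_parity(data)
print(recover([data[0], data[2]], p))  # b'efgh'
```

Reorganizing to a new XOR code after a failure means recomputing such parities over the surviving submodules, which is exactly the whole-read phase where the framework's access-intensity error model applies.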
8

GENERAL AVIATION AIRCRAFT FLIGHT STATUS IDENTIFICATION FRAMEWORK

Qilei Zhang (18284122) 01 April 2024 (has links)
The absence or limited availability of operational statistics at general aviation airports restricts airport managers and operators from assessing comprehensive operational data. The traditional manual compilation of operational statistics is labor-intensive and lacks the depth and accuracy to depict a holistic picture of a general aviation airport's operations. This research developed a reliable and efficient approach to address the problem by providing a comprehensive and versatile flight status identification framework.

Leveraging the BlueSky flight simulation module, the research can generate a synthetic flight database to emulate real-world general aviation flight scenarios. Two neural network architectures, an RNN-GAN network and a refined Seq2Seq network, were explored to examine their capability to reconstruct flight trajectories. The Seq2Seq network, which demonstrated better performance, was further employed to estimate different metrics of the simulated aircraft, such as internal mechanical metrics and flight phase. Additionally, this research undertook an array of tailored evaluation techniques to assess the efficacy of flight status predictions and conducted comparative analyses between various configurations.

Furthermore, the research concluded by discussing the future development of the framework, emphasizing its potential for generalization across various flight data applications and scenarios. The enhanced methodology for collecting operational statistics and the analysis tool will give airport managers and regulators a more comprehensive view of an airport's operations, facilitating airport planning and development.
9

System Profiling and Green Capabilities for Large Scale and Distributed Infrastructures / Profilage système et leviers verts pour les infrastructures distribuées à grande échelle

Tsafack Chetsa, Ghislain Landry 03 December 2013 (has links)
De nos jours, réduire la consommation énergétique des infrastructures de calcul à grande échelle est devenu un véritable challenge aussi bien dans le monde académique qu’industriel. Ceci est justifié par les nombreux efforts visant à réduire la consommation énergétique de ceux-ci. Ces efforts peuvent sans nuire à la généralité être divisés en deux groupes : les approches matérielles et les approches logicielles.Contrairement aux approches matérielles, les approches logicielles connaissent très peu de succès à cause de leurs complexités. En effet, elles se focalisent sur les applications et requièrent souvent une très bonne compréhension des solutions proposées et/ou de l’application considérée. Ce fait restreint leur utilisation à un nombre limité d’experts puisqu’en général les utilisateurs n’ont pas les compétences nécessaires à leurs implémentation. Aussi, les solutions actuelles en plus de leurs complexités de déploiement ne prennent en compte que le processeur alors que les composants tel que la mémoire, le stockage et le réseau sont eux aussi de gros consommateurs d’énergie. Cette thèse propose une méthodologie de réduction de la consommation énergétique des infrastructures de calcul à grande échelle. Elaborée en trois étapes à savoir : (i) détection de phases, (ii) caractérisation de phases détectées et (iii) identification de phases et reconfiguration du système ; elle s’abstrait de toute application en se focalisant sur l’infrastructure dont elle analyse le comportement au cours de son fonctionnement afin de prendre des décisions de reconfiguration.La méthodologie proposée est implémentée et évaluée sur des grappes de calcul à haute performance de tailles variées par le biais de MREEF (Multi-Resource Energy Efficient Framework). MREEF implémente la méthodologie de réduction énergétique de manière à permettre aux utilisateurs d’implémenter leurs propres mécanismes de reconfiguration du système en fonction des besoins. 
Les résultats expérimentaux montrent que la méthodologie proposée réduit la consommation énergétique de 24% pour seulement une perte de performance de moins de 7%. Ils montrent aussi que pour réduire la consommation énergétique des systèmes, on peut s’appuyer sur les sous-systèmes tels que les sous-systèmes de stockage et de communication. Nos validations montrent que notre méthodologie s’étend facilement à un grand nombre de grappes de calcul sensibles à l’énergie (energy aware). L’extension de MREEF dans les environnements virtualisés tel que le cloud montre que la méthodologie proposée peut être utilisée dans beaucoup d’autres environnements de calcul. / Nowadays, reducing the energy consumption of large scale and distributed infrastructures has truly become a challenge for both industry and academia. This is corroborated by the many efforts aiming to reduce the energy consumption of those systems. Initiatives for reducing the energy consumption of large scale and distributed infrastructures can without loss of generality be broken into hardware and software initiatives.Unlike their hardware counterpart, software solutions to the energy reduction problem in large scale and distributed infrastructures hardly result in real deployments. At the one hand, this can be justified by the fact that they are application oriented. At the other hand, their failure can be attributed to their complex nature which often requires vast technical knowledge behind proposed solutions and/or thorough understanding of applications at hand. This restricts their use to a limited number of experts, because users usually lack adequate skills. In addition, although subsystems including the memory are becoming more and more power hungry, current software energy reduction techniques fail to take them into account. This thesis proposes a methodology for reducing the energy consumption of large scale and distributed infrastructures. 
Broken into three steps, namely (i) phase detection, (ii) phase characterization, and (iii) phase identification and system reconfiguration, our methodology abstracts away from any individual application as it focuses on the infrastructure, whose runtime behaviour it analyses in order to take reconfiguration decisions accordingly. The proposed methodology is implemented and evaluated on high performance computing (HPC) clusters of varied sizes through MREEF (Multi-Resource Energy Efficient Framework). MREEF implements the proposed energy reduction methodology so as to leave users with the choice of implementing their own system reconfiguration decisions depending on their needs. Experimental results show that our methodology reduces the energy consumption of the overall infrastructure by up to 24% with less than 7% performance degradation. By taking all subsystems into account, our experiments demonstrate that the energy reduction problem in large scale and distributed infrastructures can benefit from more than "the traditional" processor frequency scaling. Experiments on clusters of varied sizes demonstrate that MREEF, and therefore our methodology, can easily be extended to a large number of energy aware clusters. The extension of MREEF to virtualized environments such as the cloud shows that the proposed methodology goes beyond HPC systems and can be used in many other computing environments.
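The three-step loop described in this abstract (detect a phase change, characterize the phase by its dominant subsystem, reconfigure accordingly) can be sketched as follows. This is a minimal illustration only: the function names, the Manhattan-distance detector, the threshold value, and the toy reconfiguration policy are all assumptions for the sketch, not MREEF's actual implementation.

```python
# Illustrative sketch of a phase-based energy reduction loop.
# All names, thresholds, and policies here are hypothetical.
from dataclasses import dataclass


@dataclass
class Sample:
    """One snapshot of subsystem utilisations, each in the range 0..1."""
    cpu: float
    memory: float
    storage: float
    network: float


def detect_phase_change(prev: Sample, cur: Sample, threshold: float = 0.25) -> bool:
    """Step (i): flag a phase change when the load vector moves by more
    than `threshold` (Manhattan distance, purely illustrative)."""
    delta = (abs(cur.cpu - prev.cpu) + abs(cur.memory - prev.memory)
             + abs(cur.storage - prev.storage) + abs(cur.network - prev.network))
    return delta > threshold


def characterize(sample: Sample) -> str:
    """Step (ii): label the phase by its dominant subsystem."""
    loads = {"cpu": sample.cpu, "memory": sample.memory,
             "storage": sample.storage, "network": sample.network}
    return max(loads, key=loads.get)


def reconfigure(label: str) -> str:
    """Step (iii): pick a reconfiguration action; in MREEF this hook is
    user-supplied, so the policy below is only a toy example that lowers
    the power state of subsystems the phase does not stress."""
    actions = {
        "cpu": "scale down memory/storage/network power states",
        "memory": "lower CPU frequency (DVFS)",
        "storage": "lower CPU frequency, spin down idle disks elsewhere",
        "network": "lower CPU frequency, keep NICs active",
    }
    return actions[label]
```

A caller would feed consecutive monitoring samples to `detect_phase_change` and, on a change, run `characterize` and `reconfigure` on the new sample; the point of the structure is that step (iii) is pluggable, matching the abstract's claim that users can supply their own reconfiguration mechanisms.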

Structural and magnetic properties of ultrathin Fe3O4 films: cation- and lattice-site-selective studies by synchrotron radiation-based techniques

Pohlmann, Tobias 19 August 2021 (has links)
This work investigates the growth dynamics of the reactive molecular beam epitaxy of Fe3O4 films and their impact on the cation distribution as well as on the magnetic and structural properties at the surface and the interfaces. In order to study the structure and composition of Fe3O4 films during growth, time-resolved high-energy x-ray diffraction (tr-HEXRD) and time-resolved hard x-ray photoelectron spectroscopy (tr-HAXPES) measurements are used to monitor the deposition of Fe3O4 ultrathin films on SrTiO3(001), MgO(001) and NiO/MgO(001). For Fe3O4/SrTiO3(001), it is found that the film first grows in a disordered island structure, then, between thicknesses of 1.5 nm and 3 nm, in FeO islands, and finally in the inverse spinel structure of Fe3O4, displaying (111) nanofacets on the surface. The films on MgO(001) and NiO/MgO(001) show a similar behaviour, with the exception that the films are not disordered in the early growth stage but form islands that immediately exhibit a crystalline FeO phase up to a thickness of 1 nm. After that, the films grow in the inverse spinel structure on both MgO(001) and NiO/MgO(001). Additionally, the tr-HAXPES measurements of Fe3O4/SrTiO3(001) demonstrate that the FeO phase is only stable during the deposition process but turns into a Fe3O4 phase when the deposition is interrupted. This suggests that the FeO layer is a strictly dynamic property of the growth process and might not be retained in the as-grown films. In order to characterize the as-grown films, a technique is introduced to extract the cation depth distribution of Fe3O4 films from magnetooptical depth profiles obtained by fitting x-ray resonant magnetic reflectivity (XRMR) curves. To this end, x-ray absorption (XAS) and x-ray magnetic circular dichroism (XMCD) spectra are recorded as well as XRMR curves to obtain the magnetooptical depth profiles.
To attribute these magnetooptical depth profiles to the depth distribution of the cations, multiplet calculations are fitted to the XMCD data. From these calculations, the contributions of the cations at the three resonant energies of the XMCD spectrum can be evaluated. Recording XRMR curves at those energies makes it possible to resolve the magnetooptical depth profiles of the three iron cation species in Fe3O4. This technique is used to resolve the cation stoichiometry at the surface of Fe3O4/MgO(001) films and at the interfaces of Fe3O4/MgO(001) and Fe3O4/NiO. The first unit cell of the Fe3O4(001) surface shows an excess of Fe3+ cations, likely related to a subsurface cation-vacancy reconstruction of the Fe3O4(001) surface, but the magnetic order of the different cation species appears not to be disturbed in this reconstructed layer. Beyond this layer, the magnetic order of all three iron cation species in Fe3O4/MgO(001) is stable throughout the entire film, with no interlayer or magnetic dead layer at the interface. For Fe3O4/NiO films, we unexpectedly observe a magnetooptical absorption at the Ni L3 edge corresponding to a ferromagnetic order throughout the entire NiO film, which is antiferromagnetic in the bulk. Additionally, the magnetooptical profiles indicate a single intermixed layer containing both Fe2+ and Ni2+ cations.
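The extraction step described above (three cation species, three resonant energies) can be illustrated as a per-depth linear inversion, assuming, purely for this sketch, that the magnetooptical profile measured at each resonant energy is a weighted sum of the three cation depth profiles, with the weights supplied by the multiplet fits. The numeric weights below are invented for illustration and are not values from this thesis.

```python
# Illustrative inversion of magnetooptical depth profiles into cation
# depth profiles. The weight matrix W is hypothetical: row = resonant
# energy, column = cation species (Fe2+ Oh, Fe3+ Td, Fe3+ Oh), entries
# standing in for the multiplet-calculated contributions.
import numpy as np

W = np.array([
    [1.0, 0.2, 0.3],   # energy E1
    [0.1, 1.0, 0.4],   # energy E2
    [0.2, 0.3, 1.0],   # energy E3
])


def cation_profiles(mo_profiles: np.ndarray) -> np.ndarray:
    """Invert m(z) = W @ c(z) slice by slice along the depth axis.

    mo_profiles: shape (3, nz), one magnetooptical depth profile per energy.
    Returns shape (3, nz): one concentration profile per cation species.
    np.linalg.solve handles all depth slices at once since they share W.
    """
    return np.linalg.solve(W, mo_profiles)
```

With three measured profiles and three species, the system is exactly determined at each depth, which is why recording XRMR curves at the three resonant energies suffices to separate the species-resolved profiles in this picture.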
