  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Autonomic virtual machine placement on a hybrid storage system in an IaaS cloud

Ouarnoughi, Hamza 03 July 2017 (has links)
IaaS (Infrastructure as a Service) cloud providers offer virtualized resources (CPU, storage, and network) as virtual machines (VMs). The growth and highly competitive nature of this market have compelled them to optimize the use of their data centers in order to offer attractive services at a lower cost. In addition to the investments related to infrastructure purchase and the cost of use, energy consumption is a major point of expenditure (2% of worldwide consumption) and is constantly increasing, so controlling it represents a vital opportunity. From a technical point of view, the control of energy consumption is mainly based on consolidation approaches. Most of these approaches, however, take into account only the CPU usage of physical machines (PMs) for VM placement. Indeed, recent studies have shown that storage systems and disk I/O represent a significant part of data center energy consumption (between 14% and 40%). In this thesis we propose a new autonomic model for VM placement optimization based on MAPE-K (Monitor, Analyze, Plan, Execute, Knowledge), in which VM I/O and the related storage systems are considered in addition to CPU. Our first contribution is a multi-level VM I/O tracer that overcomes the limitations of existing I/O monitoring tools. In the Analyze step, the collected I/O traces feed a cost model whose originality is to take into account the VM I/O profile, the storage system characteristics, and the economic constraints of the cloud environment. We also analyze the complementarity between the two main storage classes, resulting in a hybrid storage model that exploits the advantages of each. Indeed, hard disk drives (HDDs) are energy-intensive and inefficient devices compared to compute units, although their low cost per gigabyte and long lifetime count in their favor. Unlike HDDs, flash-based solid-state disks (SSDs) are faster and consume less power, but their high cost per gigabyte and short lifetime (compared to HDDs) are their major constraints. The Plan step first resulted in an extension of the CloudSim simulator to take into account VM I/O, the hybrid nature of the storage system, and the implementation of the cost model proposed in the Analyze step. Second, we proposed several heuristics based on our cost model, which we integrated and evaluated in CloudSim. Finally, we show that our approach improves the VM placement cost obtained by existing approaches by a factor of three.
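As a rough illustration of the kind of cost-driven Plan step described in this abstract, the following sketch chooses a host for a VM by minimizing a weighted estimate of CPU and I/O load. All names, weights, and the cost formula are illustrative assumptions, not the thesis's actual model:

```python
# Hypothetical sketch of a cost-driven VM placement step, loosely inspired by
# the MAPE-K loop described above. The cost formula and field names are
# illustrative assumptions only.

def placement_cost(host, vm, cpu_weight=0.5, io_weight=0.5):
    """Estimated cost of placing `vm` on `host`: weighted CPU and I/O load."""
    cpu_load = (host["cpu_used"] + vm["cpu"]) / host["cpu_cap"]
    io_load = (host["io_used"] + vm["io"]) / host["io_cap"]
    return cpu_weight * cpu_load + io_weight * io_load

def plan_placement(hosts, vm):
    """Plan step: pick the feasible host with the lowest estimated cost."""
    feasible = [h for h in hosts
                if h["cpu_used"] + vm["cpu"] <= h["cpu_cap"]
                and h["io_used"] + vm["io"] <= h["io_cap"]]
    if not feasible:
        return None
    return min(feasible, key=lambda h: placement_cost(h, vm))
```

A pure CPU-based policy would favor the least CPU-loaded host; weighing in I/O pressure can reverse that choice, which is the point the thesis makes about I/O-aware placement.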
62

Tool for Monitoring and Reporting on Project Progress

Straka, Tomáš January 2012 (has links)
This term project deals with the knowledge areas of project management, specifically the areas of time, cost, and communication. This provides the theoretical basis for the subsequent implementation of a tool for monitoring and reporting on project progress.
63

A high-throughput in-memory index, durable on flash-based SSD: Insights into the winning solution of the SIGMOD programming contest 2011

Kissinger, Thomas, Schlegel, Benjamin, Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang January 2012 (has links)
Growing memory capacities and the increasing number of cores on modern hardware call for the design of new in-memory indexing structures that reduce the number of memory transfers and minimize the need for locking to allow massively parallel access. However, most applications depend on hard durability constraints requiring a persistent medium like SSDs, which narrow the latency and throughput gap between main memory and hard disks. In this paper, we present our winning solution of the SIGMOD Programming Contest 2011. It consists of an in-memory indexing structure that provides balanced read/write performance as well as non-blocking reads and single-lock writes. Complementary to this index, we describe an SSD-optimized logging approach that meets hard durability requirements at a high throughput rate.
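The paper's actual index and logging scheme are not reproduced here, but the underlying durability idea (rebuild an in-memory index by replaying an append-only log) can be sketched as follows; the `DurableKV` class and its JSON record format are illustrative assumptions:

```python
import json
import os

class DurableKV:
    """Minimal sketch: an in-memory dict backed by an append-only log.
    Each write is appended and flushed before the in-memory update, so the
    state can be rebuilt by replaying the log after a restart or crash."""

    def __init__(self, path):
        self.path = path
        self.index = {}
        if os.path.exists(path):
            with open(path) as f:
                for line in f:           # replay: later records win
                    rec = json.loads(line)
                    self.index[rec["k"]] = rec["v"]
        self.log = open(path, "a")

    def put(self, key, value):
        self.log.write(json.dumps({"k": key, "v": value}) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())      # durability point
        self.index[key] = value

    def get(self, key):
        return self.index.get(key)
```

A production design would batch log writes and checksum records; this sketch only shows the replay-based recovery principle.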
64

Modelling the risk of underfunding in ALM models

Alwohaibi, Maram January 2017 (has links)
Asset and Liability Management (ALM) models have become well-established decision tools for pension funds. ALM problems are commonly modelled as multi-stage, in which a large terminal wealth is required, while at intermediate time periods constraints are imposed on the funding ratio, that is, the ratio of assets to liabilities. Underfunding occurs when the funding ratio is too low; a target value for the funding ratio is pre-specified by the decision maker. The risk of underfunding has usually been modelled by employing established risk measures; this controls only a single aspect of the funding ratio distribution. For example, controlling the expected shortfall below the target has limited power in controlling shortfall under worst-case scenarios. We propose ALM models in which the risk of underfunding is modelled using the concept of Second Order Stochastic Dominance (SSD). This is a criterion for ranking random variables (in our case, funding ratios) that takes the entire distributions of interest into account and works under the widely accepted assumptions that decision makers are rational and risk averse. In the proposed SSD models, investment decisions are taken such that the resulting short-term distribution of the funding ratio is non-dominated with respect to SSD, while a constraint is imposed on the expected terminal wealth. This is done by considering progressively larger tails of the funding ratio distribution and setting target levels for them; a target distribution is thus implied. Different target distributions lead to different SSD-efficient solutions, so improved distributions of funding ratios may be achieved compared to the existing risk models for ALM. This is the first contribution of this thesis. Interesting results are obtained in the special case where the target distribution is deterministic, specified by one single outcome. 
In this case, we can obtain equivalent risk minimisation models, with risk defined as expected shortfall or as worst-case loss. This represents the second contribution. The third contribution is a framework for scenario generation based on the "Birth, Immigration, Death, Emigration" (BIDE) population model and the empirical copula; the scenarios are used to evaluate the proposed models and their special cases both in-sample and out-of-sample. As an application, we consider the planning problem of a large defined-benefit (DB) pension fund in Saudi Arabia.
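For empirical (equally weighted) distributions, the SSD relation used in this abstract can be checked directly from its lower-partial-moment characterization: X dominates Y by SSD iff E[(t - X)+] <= E[(t - Y)+] for every target t. A minimal sketch of that check (not the thesis's optimization model) follows; for piecewise-linear empirical distributions it suffices to test t at the observed outcomes:

```python
# Sketch of a Second Order Stochastic Dominance (SSD) check between two
# empirical distributions, given as lists of equally likely outcomes.

def expected_shortfall_below(t, outcomes):
    """E[max(t - X, 0)] under the empirical (equally weighted) distribution."""
    return sum(max(t - x, 0.0) for x in outcomes) / len(outcomes)

def ssd_dominates(x, y):
    """True if x dominates y by SSD: E[(t - X)+] <= E[(t - Y)+] for all t.
    Both shortfall functions are piecewise linear with kinks at the outcomes,
    so checking the union of outcome points is sufficient."""
    targets = sorted(set(x) | set(y))
    return all(expected_shortfall_below(t, x) <= expected_shortfall_below(t, y)
               for t in targets)
```

With equal means, the less dispersed distribution dominates, matching the intuition that rational, risk-averse decision makers prefer it.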
65

Intensity in Phonological Intervention: Is There a Prescribed Amount?

Williams, A. 01 October 2012 (has links)
Despite a number of studies demonstrating positive outcomes for inducing clinical change in children with speech sound disorders (SSD), the question remains whether resources are being applied in an optimal manner. As a consequence, there has been a call to look within interventions and examine parameters that may contribute to intervention outcomes, specifically the intensity of intervention (dose, frequency, duration, and cumulative intervention intensity). In this paper, empirical evidence from three intervention studies, using primarily multiple oppositions along with a second contrastive approach, minimal pairs, is reported with regard to the parameters of intervention intensity. The findings indicated that greater intensity yields greater treatment outcomes. Further, quantitative and qualitative changes in intensity occur as intervention progresses, and there were differences in intensity based on the severity of the SSD. Based on these data, suggestions are made toward establishing prescribed amounts of intensity to effect treatment outcomes for children with SSD.
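As a brief illustration of how the intensity parameters listed in this abstract combine, cumulative intervention intensity is commonly computed as dose x dose frequency x total duration. The numbers below are hypothetical, not taken from the reported studies:

```python
def cumulative_intervention_intensity(dose, dose_frequency, duration):
    """Cumulative intervention intensity = dose x dose frequency x duration.
    dose: teaching episodes per session; dose_frequency: sessions per week;
    duration: total number of weeks of intervention."""
    return dose * dose_frequency * duration

# Hypothetical example: 100 trials per session, 2 sessions/week, for 8 weeks.
cii = cumulative_intervention_intensity(100, 2, 8)
```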
66

Mobility Management and Climate Change Policies

Robèrt, Markus January 2007 (has links)
Globally, the transport system faces a paradigmatic shift where, in addition to increased local traffic problems, climate change and the depletion of fossil oil reserves will force a gradual transition to renewable fuels and create a need for more resource-efficient mobility management and communication alternatives. Foresighted countries, cities or companies taking the lead in adapting to these tougher conditions might well not only solve those problems, but also turn them into business advantages. This thesis is based on six studies that attempt to develop future strategies based on rigorous, principled emission and energy efficiency targets, and to modulate the impact of travel policies, technical components and behaviours in economically advantageous ways. The modelling frameworks developed throughout the thesis build on a target-orientated approach called backcasting, where the following general components are applied: (1) target description at a conceptual level, i.e. the potential for sustainable energy systems, emissions, costs, behavioural patterns, preferences, etc.; (2) mapping of the current situation in relation to the target description; and (3) modelling of alternative sets of policies, technologies, behaviours and economic prerequisites to arrive at target achievement. Sustainable travel strategies are analysed from two main viewpoints. The first four studies focus on company travel planning, where behavioural modelling proved to be an important tool for deriving target-orientated travel policies consistent with employee preferences. The latter two studies focus on strategies and preconditions to meet future emission targets and energy efficiency requirements at a macroscopic regional level by 2030. Backcasting's role as a generic methodology for effective strategic planning is discussed.
67

The differences between SSD and HDD technology regarding forensic investigations

Geier, Florian January 2015 (has links)
In the past years solid-state disks have developed drastically and are now gaining popularity over conventional hard drives. While hard disk drives behave predictably, SSDs run transparent internal routines in the background without the user's knowledge. This work describes how this changes the everyday work of forensic specialists: a forensic investigation includes data recovery and the acquisition of a digital image of each storage medium, whose integrity is proven through a checksum. Because the internal SSD routines cannot be stopped, checksums are falsified, and the images can therefore no longer prove the integrity of the evidence. The report demonstrates the inconsistency of SSD checksums and shows the differences in data recovery: recovery rates on hard disk drives were high, while SSDs yielded no recovery or very poor rates.
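The integrity check at issue can be sketched as follows: hash the acquired image twice and compare digests. On a stable medium the digests match, whereas background SSD routines can change the medium between acquisitions. The helper below is a generic sketch using SHA-256, not the tooling used in the thesis:

```python
import hashlib

def image_checksum(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a disk image in chunks so arbitrarily large images fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

If even a single byte of the medium changes between two acquisitions, the digests differ and the image can no longer be tied to the original evidence.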
68

Enhancements to SQLite Library to Improve Performance on Mobile Platforms

Sambasivan Ramachandran, Shyam 16 December 2013 (has links)
This thesis presents solutions to improve the performance of the SQLite library on mobile systems. In particular, two approaches are presented that add lightweight locking mechanisms to the SQLite library and improve its concurrency on the Android operating system. The impact on performance is discussed after each section. Many applications on Android rely on the SQLite library to store ordered data. However, due to the heavy synchronization primitives used by the library, it becomes a performance bottleneck for applications that push large amounts of data into the database. Related work in this area also points to the SQLite database as one of the factors limiting performance. With increasing network speeds, the storage system can become a performance bottleneck as applications download larger amounts of data. This thesis addresses these issues by providing approaches that increase concurrency and add lightweight locking mechanisms. The factors determining the performance of the Application Programming Interfaces provided by SQLite are first gathered from I/O traces of common database operations. Analyzing these traces reveals opportunities for improvement. An alternative locking mechanism is added to the database file, using byte-range locks for fine-grained locking. Its impact on performance is measured using SQLite benchmarks as well as real applications. A multi-threaded benchmark is designed to measure the performance of fine-grained locking in multi-threaded applications using common database operations. Recent versions of SQLite use a write-ahead log for journaling. Writes to this sequential log can occur concurrently, especially on flash drives. By adding a sequencing mechanism for the write-ahead log, these writes can proceed simultaneously. The performance of this method is also analyzed using the synthetic and multi-threaded benchmarks. 
With these mechanisms, the library is observed to gain significant performance for concurrent writes.
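For context, switching SQLite to the write-ahead log mode mentioned above is a single pragma. This minimal Python sketch (file names are arbitrary) shows WAL being enabled and a transactional write; it illustrates the standard API, not the thesis's modified library:

```python
import os
import sqlite3
import tempfile

# WAL requires an on-disk database; it does not apply to ":memory:".
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Switch journaling to write-ahead logging; the pragma returns the active mode.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]

conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
with conn:  # one transaction: the write is appended to the -wal file
    conn.executemany("INSERT INTO kv VALUES (?, ?)", [("a", "1"), ("b", "2")])
rows = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
```

Under WAL, readers are not blocked by the single writer, which is the property the thesis builds on when it sequences concurrent appends to the log.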
69

Study and construction of a small-scale elevator system

Χαμπέρη, Γεωργία 07 June 2013 (has links)
This thesis, conducted in the Laboratory of Electromechanical Energy Conversion of the Department of Electrical and Computer Engineering at the University of Patras, deals with the construction of a small-scale passenger elevator. The aim is the study and construction of a miniature elevator for transporting persons, whose operation is controlled through printed circuit boards. Initially, a brief overview is given of the history of elevators, their main parts, and the various categories into which they are divided. Next, the operating principle of three-phase asynchronous motors is described, along with their basic construction characteristics and types. A short description follows of the microprocessor used and the capabilities it offers. The next step is the analysis of the logic the elevator follows during operation, a description of the components used to implement this logic, and the way they connect to the pins of the two microcontrollers. For easier implementation of the adopted logic, two printed circuit boards were designed and manufactured. The first controls all the elements inside the cabin, while the second controls those outside it, namely the floor call buttons, the proximity sensors, and the relays that drive the motor. In addition, there is a brief reference to the CAN network used for communication between the two boards. The full flow charts of the code and the schematics of the printed circuit boards are also provided. Finally, conclusions from the operation of the construction are presented and future improvements are proposed.
70

Riverhelp!: decision support system for integrated water resources planning and management

Guilherme de Lima 31 August 2007 (has links)
This research presents a decision support system (DSS) for integrated water resources planning and management (IWRM) named Riverhelp!, and also suggests a new methodology for the use of this kind of analysis tool. The general goal is to develop a DSS that can support the IWRM process. To do so, the DSS uses and combines currently available advanced technologies and techniques, integrating different methodologies in a single computational system that is flexible and can be used and understood not only by specialists but also by general users who are not familiar with modelling. Riverhelp! has four main building blocks and an open computational core based on the OpenMI technology, which allows users to access it programmatically and to add new tools or change computations in almost any way they want. Another important characteristic is that Riverhelp! is fully integrated with a geographic information system (GIS); it brings temporal and spatial information together and is therefore a strong package for river basin data management. The system can be used for hydrological analyses, assessment of water availability in quantity and quality, water resources planning, ecosystem studies, nonlinear optimization and operation of reservoir systems, and support of the water-use permitting process. In this research, a Riverhelp! application for the Piracicaba, Capivari, and Jundiaí river basins illustrates several possibilities for its use. The results of this study emphasize the importance and necessity of a new generation of decision support systems able to analyze, in an integrated approach, the complex issues associated with river basin management. The development of Riverhelp! makes a significant contribution to research in integrated water resources planning and management by providing a DSS with unique characteristics, combining environmental assessment tools, water quality and quantity simulation and optimization models for surface water and groundwater, a geographic information system, databases, statistical tools, and multicriteria techniques for scenario analysis.
