101

Energy Agile Cluster Communication

Mustafa, Muhammad Zain 18 March 2015 (has links)
Computing researchers have long focused on improving energy efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors, and these fluctuations are expected to intensify as renewable penetration increases. In my work I therefore introduce energy-agility, a design concept capturing a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur the additional I/O.
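
A rough way to see the tradeoff described above is to compare two policies under a fluctuating power budget. The sketch below is purely illustrative: the blink period, the extra-I/O penalty for the transition-minimizing policy, and the transition latencies are assumed numbers, not measurements from the thesis.

def throughput(policy, transition_s, hours=1, blink_period=60.0,
               extra_io_penalty=0.35):
    """Very rough comparison in useful server-seconds of work.

    'blink' : all servers duty-cycle together (50% on), paying one
              transition per blink_period but never needing remote I/O.
    'adapt' : half the servers stay on continuously (no transitions once
              settled), but data on the powered-down half must be fetched,
              modelled as a fixed fractional throughput penalty.
    transition_s, blink_period and extra_io_penalty are assumptions.
    """
    total = hours * 3600.0
    on_fraction = 0.5
    if policy == 'blink':
        overhead = transition_s / blink_period   # fraction of each period lost
        return total * on_fraction * max(0.0, 1.0 - overhead)
    else:  # 'adapt'
        return total * on_fraction * (1.0 - extra_io_penalty)

for lat in (5, 15, 60):
    print(f"transition={lat:3d}s  blink={throughput('blink', lat):8.0f}  "
          f"adapt={throughput('adapt', lat):8.0f}")

With these assumed numbers, blinking wins at 5 s and 15 s transition latencies but collapses at 60 s, echoing the qualitative conclusion of the abstract.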
102

An Interactive Visualization Model for Analyzing Data Storage System Workloads

Pungdumri, Steven Charubhat 01 March 2012 (has links)
The performance of hard disks has become increasingly important as the volume of data storage increases. At the bottom level of large-scale storage networks is the hard disk. Despite the importance of hard drives in a storage network, it is often difficult to analyze their performance due to the sheer size of the datasets they see. Additionally, hard drive workloads can have several multi-dimensional characteristics, such as access time, queue depth and block-address space. The result is that hard drive workloads are extremely diverse and large, making it very difficult to extract meaningful information from them. This is one reason why there are several inefficiencies in storage networks. In this paper, we develop a tool that assists in communicating valuable insights into these datasets, resulting in an approach that utilizes parallel coordinates to model data storage workloads captured with bus analyzers. Users are presented with an effective visualization of workload captures with this implementation, along with methods to interact with and manipulate the model in order to more clearly analyze the lowest level of their storage systems. Design decisions regarding the feature set of this tool are based on the analysis needs of domain experts and feedback from a conducted user study. Results from our user study evaluations demonstrate the efficacy of our tool for observing valuable insights, which can potentially assist in future storage system design and deployment decisions. Using this tool, domain experts were able to model storage system datasets with various features, manipulating the visualization to make observations and discoveries such as detecting logical block address banding and observing dataset trends that were not readily noticeable using conventional analysis methods.
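
To illustrate the kind of model described above, the sketch below plots a synthetic disk trace with parallel coordinates using pandas and matplotlib. The column names, the synthetic data, and the colouring by latency band are assumptions for illustration, not the tool's actual implementation.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Synthetic stand-in for a bus-analyzer capture: one row per I/O command.
rng = np.random.default_rng(42)
n = 500
trace = pd.DataFrame({
    'lba': rng.integers(0, 2**28, n),          # logical block address
    'length': rng.choice([8, 64, 256], n),     # sectors per command
    'queue_depth': rng.integers(1, 32, n),
    'latency_ms': rng.exponential(2.0, n),
})

# Colour polylines by a coarse latency band so slow commands stand out.
trace['band'] = pd.cut(trace['latency_ms'], [0, 1, 5, np.inf],
                       labels=['fast', 'medium', 'slow'])

# Normalise each axis to [0, 1] so very different scales are comparable.
cols = ['lba', 'length', 'queue_depth', 'latency_ms']
norm = (trace[cols] - trace[cols].min()) / (trace[cols].max() - trace[cols].min())
norm['band'] = trace['band']

parallel_coordinates(norm, 'band', alpha=0.3)
plt.title('Synthetic disk workload, one polyline per I/O command')
plt.show()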
103

Pohon a vedení laserového ukazovátka / Drive and leading of laser pointer

Dostál, Petr January 2013 (has links)
This master's thesis deals with a design solution for the traverse of a laser pointer used to indicate the picking position for stock removal from the SSI Logimat storage unit, including a design solution for the guideway of this traverse. The first part contains a survey of storage systems used in logistics, as well as systems upgraded with a laser pointer. The main part describes the proposed designs for the laser pointer guideway, together with the design of the mechanical transmission and the components associated with it. Subsequently, the drive for rotating the laser and the drive for the mechanical transmission are designed, and the selection of drives is verified by calculation. The final part contains a strength check of the guideway in bending, on the basis of which a reinforcement of the guideway is proposed.
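
The strength check mentioned at the end is a standard bending calculation. A minimal sketch follows, with purely assumed load, span, cross-section, and allowable stress (the thesis's actual values and support model are not given here); the guideway is treated as a simply supported beam with a point load at mid-span.

# Bending check of a guide rail, simply supported, point load at mid-span.
# All numbers below are placeholder assumptions.
F = 40.0             # load from the carriage [N]
L = 1.2              # span between supports [m]
b, h = 0.02, 0.01    # rectangular cross-section width and height [m]
sigma_allow = 120e6  # allowable bending stress of the rail material [Pa]

M_max = F * L / 4.0  # maximum bending moment [N*m]
W = b * h**2 / 6.0   # section modulus of a rectangle [m^3]
sigma = M_max / W    # maximum bending stress [Pa]

print(f"sigma = {sigma/1e6:.1f} MPa, allowable = {sigma_allow/1e6:.0f} MPa")
print("OK" if sigma <= sigma_allow else "reinforcement needed")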
104

Studie skladového hospodářství distribučního centra vybrané společnosti / Study of Warehouse Management in Distribution Center of Selected Company

Medveďová, Klára January 2017 (has links)
This thesis deals with the business processes related to warehouse management in one of the distribution centers of a brewing group; the proposal leads to an up-to-date solution based on the use of new warehouse and information technologies.
105

Regulation and Control of AC Microgrid Systems with Renewable Generation and Battery Energy Storage System

Zhao, Huiying January 2018 (has links)
No description available.
106

Design of an Open-Source SATA Core for Virtex-4 FPGAs

Gorman, Cory 01 January 2013 (has links) (PDF)
Many hard drives manufactured today use the Serial ATA (SATA) protocol to communicate with the host machine, typically a PC. SATA is a much faster and more robust protocol than its predecessor, ATA (also referred to as Parallel ATA or IDE). Many hardware designs, including those using Field-Programmable Gate Arrays (FPGAs), need a long-term storage solution, and a hard drive is often ideal. One such design is the high-speed Data Acquisition System (DAS) created for the NASA Surface Water and Ocean Topography mission. This system utilizes a Xilinx Virtex-4 FPGA. Although the DAS includes a SATA connector for interfacing with a disk, a SATA core is needed to implement the protocol for disk operations. In this work, an open-source SATA core for Virtex-4 FPGAs has been created. SATA cores for Virtex-5 and Virtex-6 devices were already available, but they are not compatible with the different serial transceivers in the Virtex-4. The core can interface with disks at SATA I or SATA II speeds, and has been shown to work at rates up to 180 MB/s. It has been successfully integrated into the hardware design of the DAS board so that radar samples can be stored on the disk.
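
For context on the quoted 180 MB/s figure: SATA I and SATA II line rates of 1.5 and 3.0 Gbit/s use 8b/10b encoding, giving theoretical payload ceilings of roughly 150 and 300 MB/s. The quick check below ignores framing and protocol overheads beyond 8b/10b, so real disks achieve somewhat less.

# Rough SATA payload ceilings: line rate minus 8b/10b encoding overhead.
for gen, line_rate_gbps in (("SATA I", 1.5), ("SATA II", 3.0)):
    payload_mbytes = line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6
    print(f"{gen}: {payload_mbytes:.0f} MB/s max payload")
# A measured 180 MB/s therefore sits between the SATA I ceiling (~150 MB/s)
# and the SATA II ceiling (~300 MB/s).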
107

Relevance Analysis for Document Retrieval

Labouve, Eric 01 March 2019 (has links) (PDF)
Document retrieval systems recover documents from a dataset and order them according to their perceived relevance to a user's search query. This is a difficult task for machines to accomplish because there exists a semantic gap between the meaning of the terms in a user's literal query and the user's true intentions. Even with the ambiguity that arises from this lack of context, users still expect that the set of documents returned by a search engine is both highly relevant to their query and properly ordered. The focus of this thesis is on document retrieval systems that explore methods of ordering documents from unstructured, textual corpora using text queries. The main goal of this study is to enhance the Okapi BM25 document retrieval model. In doing so, this research hypothesizes that the structure of text inside documents and queries holds valuable semantic information that can be incorporated into the Okapi BM25 model to increase its performance. Modifications that account for a term's part of speech, the proximity between a pair of related terms, the proximity of a term with respect to its location in a document, and query expansion are used to augment Okapi BM25 to increase the model's performance. The study resulted in 87 modifications, all of which were validated using open-source corpora. The top-scoring modification from the validation phase was then tested on the LISA corpus, where the model performed 10.25% better than Okapi BM25 when evaluated by mean average precision. When compared against two industry-standard search engines, Lucene and Solr, the top-scoring modification outperforms these systems by up to 21.78% and 23.01%, respectively.
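
For reference, the baseline being modified is the standard Okapi BM25 scoring function, sketched below. The proximity bonus at the end is a hypothetical illustration of the kind of modification studied, not the thesis's actual best-scoring variant.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Standard Okapi BM25 score of one document for a bag-of-words query.

    corpus is a list of tokenised documents, used for document frequencies
    and the average document length.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

def proximity_bonus(query_terms, doc_terms, window=5, weight=0.5):
    """Hypothetical add-on: reward documents where consecutive query terms
    appear within `window` positions of each other (illustrative only)."""
    positions = {t: [i for i, w in enumerate(doc_terms) if w == t] for t in query_terms}
    bonus = 0.0
    for a, c in zip(query_terms, query_terms[1:]):
        pairs = [(i, j) for i in positions.get(a, []) for j in positions.get(c, [])]
        if pairs and min(abs(i - j) for i, j in pairs) <= window:
            bonus += weight
    return bonus

corpus = [["disk", "storage", "workload"], ["query", "relevance", "ranking", "storage"]]
print(bm25_score(["storage", "ranking"], corpus[1], corpus)
      + proximity_bonus(["storage", "ranking"], corpus[1]))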
108

Latency Tradeoffs in Distributed Storage Access

Ray, Madhurima January 2019 (has links)
The performance of storage systems is central to handling the huge amounts of data being generated from a variety of sources, including scientific experiments, social media, crowdsourcing, and an increasing variety of cyber-physical systems. Emerging high-speed storage technologies enable the ingestion of and access to such large volumes of data efficiently. However, the combination of the high data-volume requirements of new applications, which largely generate unstructured and semi-structured streams of data, with these emerging high-speed storage technologies poses a number of new challenges, including low-latency handling of such data and ensuring that the network providing access to the data does not become the bottleneck. The traditional relational model is not well suited for efficiently storing and retrieving unstructured and semi-structured data. An alternative mechanism, popularly known as a Key-Value Store (KVS), has been investigated over the last decade to handle such data. A KVS needs only a 'key' to uniquely identify a data record, which may be of variable length and may or may not have further structure in the form of predefined fields. Most existing KVSs were designed for hard-disk-based storage (before SSDs gained popularity), where avoiding random accesses is crucial for good performance. Unfortunately, as modern solid-state drives become the norm in data center storage, these HDD-oriented KV structures result in high read, write, and space amplification, which is detrimental to both the SSD's performance and its endurance. Note also that regardless of how storage systems are deployed, access to large amounts of storage by many nodes must necessarily go over the network. Emerging storage technologies such as Flash, 3D XPoint, and phase-change memory (PCM), coupled with highly efficient access protocols such as NVMe, can ingest and read data at rates that challenge even leading-edge networking technologies such as 100 Gb/s Ethernet. At the same time, some of the higher-end storage technologies (e.g., Intel Optane storage based on 3D XPoint, PCM, etc.) coupled with lean protocols like NVMe can provide storage access latencies in the 10-20 µs range, which means that the additional latency due to network congestion can become significant. The purpose of this thesis is to address some of the aforementioned issues. We propose a new hash-based and SSD-friendly key-value store architecture called FlashKey, which is designed specifically for SSDs to provide low access latencies, low read and write amplification, and the ability to easily trade off latency for sequential access, for example in range queries. Through a detailed experimental evaluation of FlashKey against the two most popular KVSs, RocksDB and LevelDB, we demonstrate that even as an initial implementation we achieve substantially better write amplification, average latency, and tail latency at similar or better space amplification. Next, we deal with network congestion by dynamically replicating data items that are heavily used. The tradeoff here is between latency and the replication or migration overhead.
It is important to reverse the replication or migration as the congestion fades away, since placing data and the applications that access it together in a consolidated fashion significantly reduces propagation delay and increases network energy-saving opportunities, which matters because data center networks today are equipped with high-speed, power-hungry infrastructure. Finally, we design a tradeoff between network consolidation and congestion, trading latency for power savings. During quiet hours, we consolidate traffic onto fewer links and put the unused links into sleep modes to save power. As traffic increases, we reactively spread traffic out again to avoid congestion from the upcoming surge. There are numerous studies in the area of network energy management that use similar approaches; however, most of them perform energy management at a coarser time granularity (e.g., 24 hours or beyond). In contrast, our mechanism tries to exploit all the small-to-medium gaps in traffic and invoke network energy management without causing a significant increase in latency. / Computer and Information Science
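
The abstract does not give FlashKey's internals, but the general idea of a hash-indexed, SSD-friendly store can be sketched as an append-only value log with an in-memory hash index, which keeps writes sequential and avoids LSM-style compaction. The code below is a toy illustration under those assumptions, not the thesis's actual design.

import os
import struct

class AppendOnlyKV:
    """Toy hash-indexed KV store: values go to a sequential log (SSD-friendly
    writes) and an in-memory dict maps each key to its latest log offset.
    Illustrative only; a real design would add crash recovery, log garbage
    collection, and an on-flash index."""
    def __init__(self, path):
        self.log = open(path, 'ab+')
        self.index = {}                        # key -> (offset, length)

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        # record layout: key length, value length, key, value
        self.log.write(struct.pack('<II', len(key), len(value)) + key + value)
        self.index[key] = (offset + 8 + len(key), len(value))

    def get(self, key: bytes):
        if key not in self.index:
            return None
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)

db = AppendOnlyKV('toy.log')
db.put(b'user:1', b'alice')
db.put(b'user:1', b'bob')                      # overwrite = new sequential record
print(db.get(b'user:1'))                       # b'bob'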
109

Powering Stability: Grid-Connected Batteries' Influence on Peak Electricity Pricing

Holm, Emil, Shayeganfar, Parsa January 2024 (has links)
Battery Energy Storage Systems (BESSs) have become an increasingly popular feature of the electrical grid in the California ISO (CAISO) as a means to address the challenges posed by renewable energy variability and escalating peak demand. Because of their ability to reduce peak load on traditional generators and extend the benefits of the merit order effect, they have been theorized and claimed to reduce peak electricity prices. The purpose of this study is to test these claims within CAISO and to understand what effect BESSs have had on peak electricity prices. Our findings show that there has been a significant decrease in prices after the introduction of BESSs into the grid, although we found no significant effect of increasing BESS utilization on peak electricity prices. We conclude that BESS utilization in CAISO has had no effect on peak electricity prices. We contribute to the literature on the tangible market impacts of BESSs, highlighting the need for further empirical research in this domain.
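
One hedged way to read the empirical strategy is as a regression of daily peak prices on a post-introduction dummy and on BESS utilization. The sketch below uses a made-up data frame with made-up column names, deliberately constructed to echo the stated finding; it is not the authors' actual specification or data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily panel: peak price, BESS discharge at peak, post-BESS dummy.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    'post_bess': (np.arange(n) > 400).astype(int),
    'bess_mwh': rng.gamma(2.0, 50.0, n),
    'load_gw': rng.normal(30, 3, n),
})
# Synthetic prices: a level drop after BESS introduction, no utilization effect.
df['peak_price'] = (80 - 10 * df['post_bess'] + 0.0 * df['bess_mwh']
                    + 2 * df['load_gw'] + rng.normal(0, 15, n))

X = sm.add_constant(df[['post_bess', 'bess_mwh', 'load_gw']])
model = sm.OLS(df['peak_price'], X).fit(cov_type='HAC', cov_kwds={'maxlags': 7})
print(model.summary().tables[1])   # post_bess significant, bess_mwh not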
110

Caractérisation et modélisation de composants de stockage électrochimique et électrostatique / Characterization and modeling of electrochemical and electrostatic storage components

Devillers, Nathalie 29 November 2012 (has links)
In aeronautics, optimization of the overall energy efficiency, reduction of on-board weight, and the need to meet growing energy requirements are driving the development of new technologies and methods to generate, distribute, convert, and store electrical energy aboard the aircraft. In this thesis, electrical energy storage components are characterized with a view to modeling them. Among the various storage systems, presented in an introductory state of the art, ultracapacitors and lithium-ion polymer secondary batteries are selected; at the scale of the application, they are regarded as power and energy sources, respectively. These storage components are characterized by constant-current chronopotentiometry and by electrochemical impedance spectroscopy. The tests are carried out under experimental conditions that define the validity domain of the models, consistent with the constraints of the final application. Different models are then developed according to their intended use: simple, functional models sufficient for global energy management, and dynamic, behavioral models needed to analyze network power quality. The models are then validated on mission profiles. Finally, to obtain a storage system that performs well and matches the energy needs of the aircraft, a sizing method is proposed that combines complementary storage components. A frequency-based management of the sources is implemented in order to minimize the mass of the storage system.
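
Electrochemical impedance spectroscopy data of the kind described above is commonly fitted to a simple equivalent circuit. The sketch below fits a series resistance plus one parallel RC branch; the circuit topology, parameter values, and synthetic spectrum are assumptions for illustration, not the thesis's actual models or measurements.

import numpy as np
from scipy.optimize import curve_fit

def rc_impedance(f, r0, r1, c1):
    """Complex impedance of R0 in series with (R1 parallel C1)."""
    w = 2 * np.pi * f
    return r0 + r1 / (1 + 1j * w * r1 * c1)

def stacked(f, r0, r1, c1):
    z = rc_impedance(f, r0, r1, c1)
    return np.concatenate([z.real, z.imag])    # curve_fit needs a real vector

# Synthetic spectrum standing in for measured data (all values made up:
# 12 mOhm series resistance, 25 mOhm branch resistance, 5 F capacitance).
freqs = np.logspace(-2, 4, 60)
z_meas = rc_impedance(freqs, 0.012, 0.025, 5.0)
z_meas = z_meas + np.random.default_rng(0).normal(0, 2e-4, freqs.size) * (1 + 1j)

popt, _ = curve_fit(stacked, freqs,
                    np.concatenate([z_meas.real, z_meas.imag]),
                    p0=[0.01, 0.02, 1.0])
print("fitted R0=%.4f ohm, R1=%.4f ohm, C1=%.1f F" % tuple(popt))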
