131

Energy Agile Cluster Communication

Mustafa, Muhammad Zain 18 March 2015 (has links)
Computing researchers have long focused on improving energy-efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors, and these fluctuations are expected to intensify as renewable penetration increases. Thus, in my work I introduce energy-agility, a design concept for a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power-state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur the additional I/O.
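To make the transition-latency tradeoff concrete, here is a back-of-the-envelope sketch (not the thesis's evaluation model; the 60 s active/inactive blink cycle is an assumed illustration) of how much useful time a blinking server loses to power-state transitions:

```python
def blink_overhead(transition_s, active_s=60.0, inactive_s=60.0):
    """Fraction of wall-clock time lost to power-state transitions for a
    server that 'blinks' between active and inactive states. Assumes each
    blink cycle pays two transitions (down and up) during which no useful
    work is done -- a simplification, not the thesis's model."""
    cycle = active_s + inactive_s + 2 * transition_s
    return 2 * transition_s / cycle

# A 15 s transition wastes ~20% of an assumed 60 s/60 s blink cycle;
# a 60+ s transition wastes 50% or more, which is one intuition for why
# slow-transitioning servers cannot afford to blink and must instead
# minimize transitions and pay the extra I/O.
for t in (1, 15, 60):
    print(f"transition={t:>2}s  overhead={blink_overhead(t):.1%}")
```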
132

Optimalizace čtení dat z distribuované databáze / Optimization of data reading from a distributed database

Kozlovský, Jiří January 2019 (has links)
This thesis focuses on optimizing data reads from the distributed NoSQL database Apache HBase with regard to the desired data granularity. The assignment originated as a product request from the Reklama division (Sklik.cz cost center) of Seznam.cz, a.s., to improve user experience by making filtering of aggregated statistical data available to advertiser web application users viewing entity performance history.
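The abstract does not detail the optimization itself, but HBase reads over aggregated statistics are typically driven by row-key range scans. As a hedged illustration (using the third-party happybase client; the table name and key layout are invented for this sketch, not taken from the thesis), encoding the entity and time bucket in the row key turns a granularity filter into one contiguous scan:

```python
import happybase  # third-party Python client for Apache HBase

connection = happybase.Connection("hbase-host")  # hypothetical host
table = connection.table("ad_stats_daily")       # hypothetical table

# Row keys shaped like b"<entity_id>#<yyyymmdd>" keep one entity's history
# physically contiguous, so "performance of campaign42 in January 2019"
# becomes a single range scan rather than a filtered full-table pass.
start, stop = b"campaign42#20190101", b"campaign42#20190201"
for row_key, cells in table.scan(row_start=start, row_stop=stop,
                                 columns=[b"stats:clicks", b"stats:cost"]):
    print(row_key, cells)
```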
133

An Interactive Visualization Model for Analyzing Data Storage System Workloads

Pungdumri, Steven Charubhat 01 March 2012 (has links)
The performance of hard disks has become increasingly important as the volume of data storage grows. At the bottom level of large-scale storage networks is the hard disk. Despite its importance, the performance of a hard disk is often difficult to analyze due to the sheer size of the workload datasets involved. Additionally, hard drive workloads have several multi-dimensional characteristics, such as access time, queue depth, and block-address space. As a result, hard drive workloads are extremely diverse and large, making it very difficult to extract meaningful information from them; this is one reason storage networks harbor several inefficiencies. In this paper, we develop a tool that assists in communicating valuable insights into these datasets, using parallel coordinates to model data storage workloads captured with bus analyzers. The implementation presents users with an effective visualization of workload captures, along with methods to interact with and manipulate the model in order to analyze the lowest level of their storage systems more clearly. Design decisions regarding the tool's feature set are based on the analysis needs of domain experts and feedback from a user study. Results from our user-study evaluations demonstrate the tool's efficacy for surfacing valuable insights that can assist in future storage system design and deployment decisions. Using this tool, domain experts were able to model storage system datasets and manipulate the visualization to make observations and discoveries, such as detecting logical-block-address banding and observing dataset trends that were not readily noticeable using conventional analysis methods.
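As a rough sketch of the underlying visualization technique (the column names and values below are invented for illustration; this is not the thesis's implementation), pandas can render a parallel-coordinates plot where each polyline is one I/O command and each vertical axis is one workload dimension:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical bus-analyzer capture: one row per I/O command.
capture = pd.DataFrame({
    "block_address":  [1024, 2048, 1030, 900000, 901000, 1100],
    "access_time_ms": [0.40, 0.50, 0.45, 7.20, 6.90, 0.42],
    "queue_depth":    [2, 2, 3, 16, 14, 2],
    "op":             ["read", "read", "read", "write", "write", "read"],
})

# Each line is one command; clusters of lines on the block_address axis
# are the kind of logical-block-address banding the user study surfaced.
parallel_coordinates(capture, class_column="op", colormap="tab10")
plt.title("Disk workload capture (illustrative data)")
plt.show()
```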
134

Data storage for a small lumber-processing company in Sweden

Bäcklund, Simon, Ljungdahl, Albin January 2021 (has links)
The world is becoming increasingly digitized, and with this trend comes an increasing need for companies of all sizes to store data. For smaller enterprises, this can prove a major challenge due to limitations in knowledge and financial assets. The purpose of this study is therefore to investigate how smaller companies can satisfy their data storage needs, and which database management system to use, so that these shortcomings do not hold back their development and growth. To fulfill this purpose, a small wood-processing company in Sweden is examined and used as an example. To investigate and answer the problem, literature research is conducted to gain knowledge about data storage and the different options that exist; Microsoft Access, MySQL, and MongoDB are then selected for evaluation, and their performance is compared in controlled experiments. The results of this study indicate that, due to the small amount of data the example company possesses, the simplicity of Microsoft Access trumps the high performance of its competitors. However, with increasingly developed internet infrastructure, hosting a database in the cloud has become feasible; if that is the desired solution, Microsoft Access has a higher operating cost than the other alternatives, making MySQL come out on top.
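The thesis does not publish its experiment harness, but the shape of such a controlled comparison is simple to sketch (the query callables and repeat count here are placeholders, not the study's actual workload):

```python
import statistics
import time

def median_latency(run_query, repeats=30):
    """Median wall-clock latency of a query callable over several runs;
    the median damps outliers from caching and background activity."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()  # e.g. a closure issuing one SELECT or find()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical usage: one callable per database under test.
# results = {name: median_latency(q) for name, q in
#            {"MySQL": mysql_query, "MongoDB": mongo_query}.items()}
```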
135

Design of an Open-Source SATA Core for Virtex-4 FPGAs

Gorman, Cory 01 January 2013 (has links) (PDF)
Many hard drives manufactured today use the Serial ATA (SATA) protocol to communicate with the host machine, typically a PC. SATA is a much faster and much more robust protocol than its predecessor, ATA (also referred to as Parallel ATA or IDE). Many hardware designs, including those using Field-Programmable Gate Arrays (FPGAs), have a need for a long-term storage solution, and a hard drive would be ideal. One such design is the high-speed Data Acquisition System (DAS) created for the NASA Surface Water and Ocean Topography mission. This system utilizes a Xilinx Virtex-4 FPGA. Although the DAS includes a SATA connector for interfacing with a disk, a SATA core is needed to implement the protocol for disk operations. In this work, an open-source SATA core for Virtex-4 FPGAs has been created. SATA cores for Virtex-5 and Virtex-6 devices were already available, but they are not compatible with the different serial transceivers in the Virtex-4. The core can interface with disks at SATA I or SATA II speeds, and has been shown working at rates up to 180 MB/s. It has been successfully integrated into the hardware design of the DAS board so that radar samples can be stored on the disk.
136

Relevance Analysis for Document Retrieval

Labouve, Eric 01 March 2019 (has links) (PDF)
Document retrieval systems recover documents from a dataset and order them according to their perceived relevance to a user's search query. This is a difficult task for machines to accomplish because there exists a semantic gap between the meaning of the terms in a user's literal query and the user's true intentions. Even with the ambiguity that arises from this lack of context, users still expect the set of documents returned by a search engine to be both highly relevant to their query and properly ordered. The focus of this thesis is on document retrieval systems that explore methods of ordering documents from unstructured, textual corpora using text queries. The main goal of this study is to enhance the Okapi BM25 document retrieval model. In doing so, this research hypothesizes that the structure of text inside documents and queries holds valuable semantic information that can be incorporated into the Okapi BM25 model to increase its performance. Modifications that account for a term's part of speech, the proximity between a pair of related terms, the proximity of a term with respect to its location in a document, and query expansion are used to augment Okapi BM25. The study resulted in 87 modifications, all validated using open-source corpora. The top-scoring modification from the validation phase was then tested on the Lisa corpus, where the model performed 10.25% better than Okapi BM25 when evaluated under mean average precision. When compared against two industry-standard search engines, Lucene and Solr, the top-scoring modification outperforms these systems by up to 21.78% and 23.01%, respectively.
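For reference, the baseline being augmented is the standard Okapi BM25 ranking function. A minimal sketch follows (k1 = 1.2 and b = 0.75 are common defaults, not values taken from the thesis, and this is not the thesis's code or its modified variants):

```python
import math
from collections import Counter

def bm25(query_terms, doc_terms, doc_freq, num_docs, avgdl, k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a query.

    doc_freq maps a term to the number of corpus documents containing it;
    avgdl is the average document length in tokens.
    """
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for term in query_terms:
        n = doc_freq.get(term, 0)
        if n == 0:
            continue  # a term absent from the corpus contributes nothing
        idf = math.log(1 + (num_docs - n + 0.5) / (n + 0.5))
        freq = tf[term]
        score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * dl / avgdl))
    return score
```

The thesis's modifications layer part-of-speech, term-proximity, positional, and query-expansion signals on top of this base score.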
137

TRUE UNMANNED TELEMETRY COLLECTION USING OC-12 NETWORK DATA FORWARDING

Bullers, Bill 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The cost of telemetry collection is significantly reduced by unmanned store-and-forward systems made possible by 622 Mbit/s OC-12 networks, which are readily available to telemetry system architects. In-band control of remote unmanned collection platforms is handled through a Java browser interface. Data from many telemetry channels are collected and temporarily stored on a digital disk system designed around the OC-12 network. The I/O, storage, and network components are configured, set, and initialized remotely. Recordings are started and stopped on command and can be made round-the-clock. Files of stored, time-stamped data are delivered at the OC-12 rate to a distribution center.
138

Small angle neutron scattering studies of magnetic recording media

Wismayer, Matthew P. January 2008 (has links)
Since the beginning of the twenty-first century, educational and commercial institutions have driven the demand for cheap and efficient data storage. The storage medium known as magnetic recording media has remained the mainstay for most computer systems due to its large storage capacity per dollar. With the recording media's ever-increasing storage density have come reductions in the magnetic grain size per bit. At the recording bit's density threshold, the magnetic grains become more susceptible to thermal activation, which can render the storage medium unusable. An accurate characterisation of the recording layer's sub-granular structure is essential for understanding the magnetic and thermal mechanisms of high-density recording media. Small-Angle Neutron Scattering (SANS) studies have been performed to investigate the magnetic and physical properties of longitudinal and perpendicular recording grains. The SANS studies of longitudinal magnetic recording media have probed the recording layer's magnetic grain size at a sub-nanometer resolution. In conjunction with these studies, SQUID magnetometry was used to characterise the recording grain's bulk magnetism. Measurements showed that the recording grain was composed of a ferromagnetic hard core (Co-enriched) and a weakly magnetic shell (Cr-enriched). These results provided important information on the grain's magnetic anisotropy, which determines the recording media's magnetic stability. The polarised SANS studies were used to characterise the recording layer's physical granular structure. It was shown that the physical grain size was comparable to its magnetic counterpart. These physical measurements provided insight into the recording grain's chemical composition. The magnetic properties of perpendicular magnetic recording media were studied using SANS and VSM measurements. The neutron scattering studies revealed that the recording grain was composed of a hard ferromagnetic centre enriched with cobalt. The VSM studies showed that the magnetic recording grains exhibited a large perpendicular magnetic anisotropy. These combined studies provided information on the recording grain's ferromagnetic composition and magnetic stability. The polarised SANS measurements showed the physical grain size to be slightly smaller than its magnetic counterpart. This size difference was attributed to the non-magnetic grain boundary composed of SiO2. The boundary thickness determined the degree of inter-granular exchange coupling. Further polarised studies investigated the recording layer's switching behaviour, which revealed more information on the grain's magnetic stability.
139

Atomar aufgelöste Strukturuntersuchungen für das Verständnis der magnetischen Eigenschaften von FePt-HAMR-Prototypmedien / Atomically resolved structural investigations for understanding the magnetic properties of FePt HAMR prototype media

Wicht, Sebastian 20 December 2016 (has links) (PDF)
Dank der hohen uniaxialen Kristallanisotropie der L10-geordneten Phase gelten nanopartikuläre FePt+C-Schichten als aussichtsreiche Kandidaten zukünftiger Datenspeichersysteme. Aus diesem Grund werden in der vorliegenden Arbeit in Kooperation mit HGST, a Western Digital Company, Prototypen solcher Medien strukturell bis hin zu atomarer Auflösung charakterisiert. Anhand von lokalen Messungen der Gitterparameter der FePt-Partikel wird gezeigt, dass die Partikel dünne, zementitartige Verbindungen an ihrer Oberfläche aufweisen. Zusätzlich werden große Partikel mit kleinem Oberfläche-Volumen-Verhältnis von kontinuierlichen Kohlenstoffschichten umschlossen, was die Deposition weiteren Materials verhindert. Eine Folge davon ist die Entstehung einer zweiten Lage statistisch orientierter Partikel, die sich negativ auf das magnetische Verhalten der FePt-Schicht auswirkt. Weiterhin wird die besondere Bedeutung des eingesetzten Substrats sowie seiner Gitterfehlpassung zur L10-geordneten Einheitszelle nachgewiesen. So lässt sich das Auftreten fehlorientierter ebenso wie das L12-geordneter Kristallite im Fall großer Fehlpassung und einkristalliner Substrate unterdrücken, was andererseits jedoch zu einer stärkeren Verkippung der [001]-Achsen der individuellen FePt-Partikel führt. Abschließend wird mithilfe der Elektronenholographie nachgewiesen, dass die Magnetisierungsrichtungen der FePt-Partikel aufgrund von Anisotropieschwankungen von den [001]-Achsen abweichen können. / Highly textured L10-ordered FePt+C films are foreseen to become the next generation of magnetic data storage media. Therefore, prototypes of such media (provided by HGST, a Western Digital Company) are structurally investigated down to the atomic level by HR-TEM, and the observed results are correlated with the magnetic performance of the film. In a first study, the occurrence of a strongly disturbed surface layer with a lattice spacing that corresponds to cementite is observed. Furthermore, the individual particles are surrounded by a thin carbon layer that suppresses the deposition of further material and therefore leads to the formation of a second layer of particles. Without contact to the seed layer, these particles are randomly oriented and degrade the magnetic performance of the media. A further study reveals that a selection of single-crystalline substrates with an appropriate lattice mismatch to the L10-ordered unit cell can be applied to avoid the formation of in-plane-oriented and L12-ordered crystals. Unfortunately, the required large mismatch results in a broadening of the texture of the [001]-axes of the individual grains. As electron holography studies reveal, the orientation of the magnetization of the individual grains can deviate from the structural [001]-axis due to local fluctuations of the uniaxial anisotropy.
140

Optimizing Virtual Machine I/O Performance in Cloud Environments

Lu, Tao 01 January 2016 (has links)
Maintaining closeness between data sources and data consumers is crucial for workload I/O performance. In cloud environments, this closeness can be violated by system administrative events and storage architecture barriers. VM migration events are frequent in cloud environments; migration changes a VM's runtime interconnections or cache contexts, significantly degrading its I/O performance. Virtualization is the backbone of cloud platforms, but I/O virtualization adds hops to the workload data access path, prolonging I/O latencies. I/O virtualization overheads cap the throughput of high-speed storage devices and impose high CPU utilization and energy consumption on cloud infrastructures. To maintain the closeness between data sources and workloads during VM migration, we propose Clique, an affinity-aware migration scheduling policy that minimizes the aggregate wide-area communication traffic during storage migration in virtual cluster contexts. In host-side caching contexts, we propose Successor, which recognizes warm pages and prefetches them into the caches of destination hosts before migration completes. To bypass the I/O virtualization barriers, we propose VIP, an adaptive I/O prefetching framework that uses a virtual I/O front-end buffer for prefetching, avoiding the on-demand involvement of I/O virtualization stacks and accelerating I/O responses. Analysis of the traffic trace of a virtual cluster containing 68 VMs demonstrates that Clique can reduce inter-cloud traffic by up to 40%. Tests with the MPI Reduce_scatter benchmark show that Clique can keep VM performance during migration at up to 75% of the non-migration scenario, more than 3 times that of a random VM-choosing policy. In host-side caching environments, Successor performs better than existing cache warm-up solutions and achieves zero VM-perceived cache warm-up time with low resource costs. At the system level, we conducted a comprehensive quantitative analysis of I/O virtualization overheads. Our trace-replay-based simulation demonstrates the effectiveness of VIP for data prefetching with negligible additional cache resource costs.
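The abstract leaves VIP's internals out; as a loose sketch of the general idea of a front-end buffer absorbing sequential reads before they reach the virtualization stack (the fixed readahead window is an assumption — the real framework is described as adaptive):

```python
class FrontEndPrefetchBuffer:
    """Toy front-end buffer: on a miss, fetch a readahead window in one
    backend call so subsequent sequential reads hit the buffer instead of
    traversing the I/O virtualization stack. Illustrative only."""

    def __init__(self, backend_read, window=8):
        self.backend_read = backend_read  # callable: block number -> bytes
        self.window = window
        self.buffer = {}

    def read(self, block_no):
        if block_no in self.buffer:
            return self.buffer.pop(block_no)  # hit: no virtualization hop
        # Miss: prefetch the block plus a readahead window in one pass.
        for b in range(block_no, block_no + self.window):
            self.buffer[b] = self.backend_read(b)
        return self.buffer.pop(block_no)

# Hypothetical usage with a stub backend:
# buf = FrontEndPrefetchBuffer(lambda b: f"block-{b}".encode())
# buf.read(0); buf.read(1)  # the second read is served from the buffer
```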
