131

Optimalizace čtení dat z distribuované databáze / Optimization of data reading from a distributed database

Kozlovský, Jiří January 2019 (has links)
This thesis focuses on optimizing data reads from the distributed NoSQL database Apache HBase with regard to the desired data granularity. The assignment originated as a product request from Seznam.cz, a.s. (Reklama division, Sklik.cz cost center) to improve user experience by letting advertiser web application users filter aggregated statistical data when viewing entity performance history.
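
The abstract describes reading pre-aggregated statistics from HBase at a chosen granularity. As a minimal, hypothetical sketch (not the thesis's actual schema), one common approach is to encode entity, granularity, and timestamp in the row key so that a time range at a given granularity becomes one contiguous scan; the `happybase` client and the key layout below are illustrative assumptions:

```python
import happybase  # Thrift-based HBase client; assumes an HBase Thrift server is running

def scan_stats(host, entity_id, granularity, start_ts, end_ts):
    """Fetch pre-aggregated stats for one entity over [start_ts, end_ts).

    Hypothetical row-key layout: <entity_id>|<granularity>|<timestamp>,
    chosen so that one time range maps to a single contiguous scan
    instead of many scattered point reads.
    """
    conn = happybase.Connection(host)
    table = conn.table('entity_stats')  # illustrative table name
    start = f'{entity_id}|{granularity}|{start_ts:013d}'.encode()
    stop = f'{entity_id}|{granularity}|{end_ts:013d}'.encode()
    rows = list(table.scan(row_start=start, row_stop=stop))
    conn.close()
    return rows
```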
132

An Interactive Visualization Model for Analyzing Data Storage System Workloads

Pungdumri, Steven Charubhat 01 March 2012 (has links)
The performance of hard disks has become increasingly important as the volume of data storage increases. At the bottom level of large-scale storage networks is the hard disk. Despite the importance of hard drives in a storage network, analyzing their performance is often difficult due to the sheer size of the datasets they see. Additionally, hard drive workloads can have several multi-dimensional characteristics, such as access time, queue depth and block-address space. The result is that hard drive workloads are extremely diverse and large, making it very difficult to extract meaningful information from them; this is one reason storage networks harbor several inefficiencies. In this paper, we develop a tool that assists in communicating valuable insights into these datasets, using parallel coordinates to model data storage workloads captured with bus analyzers. The implementation presents users with an effective visualization of workload captures, along with methods to interact with and manipulate the model in order to more clearly analyze the lowest level of their storage systems. Design decisions regarding the tool's feature set are based on the analysis needs of domain experts and feedback from a conducted user study. Results from our user study evaluations demonstrate the efficacy of our tool for surfacing valuable insights, which can potentially assist in future storage system design and deployment decisions. Using this tool, domain experts were able to model storage system datasets with various features, manipulating the visualization to make observations and discoveries such as detecting logical-block-address banding and observing dataset trends that were not readily noticeable using conventional analysis methods.
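
As a rough illustration of the visualization idea (not the authors' tool, which is interactive and works on bus-analyzer captures), pandas ships a basic parallel-coordinates plot that can render a small synthetic workload trace:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
n = 200
# Synthetic disk-workload sample: one row per I/O command (assumed columns).
trace = pd.DataFrame({
    'lba': rng.integers(0, 2**32, n),           # logical block address
    'size_kb': rng.choice([4, 8, 64, 256], n),  # transfer size
    'queue_depth': rng.integers(1, 32, n),
    'latency_ms': rng.gamma(2.0, 2.0, n),
    'op': rng.choice(['read', 'write'], n),     # class column used for coloring
})

# Normalize numeric axes so they share a comparable vertical scale.
num_cols = ['lba', 'size_kb', 'queue_depth', 'latency_ms']
trace[num_cols] = (trace[num_cols] - trace[num_cols].min()) / (
    trace[num_cols].max() - trace[num_cols].min())

parallel_coordinates(trace, 'op', cols=num_cols, alpha=0.3)
plt.title('Synthetic disk workload, parallel coordinates')
plt.show()
```

Banding effects like the logical-block-address banding mentioned in the abstract show up as bundles of lines crossing the `lba` axis at the same heights.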
133

Data storage for a small lumber-processing company in Sweden

Bäcklund, Simon, Ljungdahl, Albin January 2021 (has links)
The world is becoming increasingly digitized, and with this trend comes an increasing need for companies of all sizes to store data. For smaller enterprises, this could prove to be a major challenge due to limitations in knowledge and financial assets. The purpose of this study is therefore to investigate how smaller companies can satisfy their data storage needs, and which database management system to use, so that these shortcomings do not hold back their development and growth. To fulfill this purpose, a small wood-processing company in Sweden is examined and used as an example. To investigate and answer the problem, literature research is conducted to gain knowledge about data storage and the different options that exist. Microsoft Access, MySQL, and MongoDB are selected for evaluation, and their performance is compared in controlled experiments. The results of this study indicate that, due to the small amount of data the example company possesses, the simplicity of Microsoft Access trumps the higher performance of its competitors. However, with increasingly developed internet infrastructure, hosting a database in the cloud has become feasible. If a cloud-hosted database is the desired solution, Microsoft Access has a higher operating cost than the other alternatives, making MySQL come out on top.
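
The comparison methodology can be sketched as a simple timing harness. The snippet below is illustrative only: it times repeated queries against SQLite as a self-contained stand-in, whereas the study benchmarked Microsoft Access, MySQL, and MongoDB.

```python
import sqlite3
import statistics
import time

def time_query(run_query, repeats=100):
    """Return the median wall-clock latency of a query callable, in ms."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)

# Stand-in database; the study used Access, MySQL, and MongoDB instead.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE boards (id INTEGER PRIMARY KEY, length_mm INTEGER)')
conn.executemany('INSERT INTO boards (length_mm) VALUES (?)',
                 [(i % 6000,) for i in range(10_000)])
conn.commit()

median_ms = time_query(
    lambda: conn.execute('SELECT COUNT(*) FROM boards WHERE length_mm > 3000').fetchone())
print(f'median query latency: {median_ms:.3f} ms')
```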
134

Design of an Open-Source SATA Core for Virtex-4 FPGAs

Gorman, Cory 01 January 2013 (has links) (PDF)
Many hard drives manufactured today use the Serial ATA (SATA) protocol to communicate with the host machine, typically a PC. SATA is a much faster and more robust protocol than its predecessor, ATA (also referred to as Parallel ATA or IDE). Many hardware designs, including those using Field-Programmable Gate Arrays (FPGAs), need a long-term storage solution, and a hard drive is ideal. One such design is the high-speed Data Acquisition System (DAS) created for the NASA Surface Water and Ocean Topography mission, which utilizes a Xilinx Virtex-4 FPGA. Although the DAS includes a SATA connector for interfacing with a disk, a SATA core is needed to implement the protocol for disk operations. In this work, an open-source SATA core for Virtex-4 FPGAs has been created. SATA cores for Virtex-5 and Virtex-6 devices were already available, but they are not compatible with the different serial transceivers in the Virtex-4. The core can interface with disks at SATA I or SATA II speeds and has been shown working at rates up to 180 MB/s. It has been successfully integrated into the hardware design of the DAS board so that radar samples can be stored on the disk.
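
The quoted 180 MB/s is consistent with SATA line rates once 8b/10b encoding overhead is accounted for; a quick back-of-the-envelope check using standard SATA figures (not taken from the thesis):

```python
# SATA line rates in Gbit/s; 8b/10b coding carries 8 data bits per 10 line bits.
for gen, line_rate_gbps in (('SATA I', 1.5), ('SATA II', 3.0)):
    payload_mbs = line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6  # MB/s before protocol overhead
    print(f'{gen}: {payload_mbs:.0f} MB/s raw payload ceiling')
# SATA I: 150 MB/s, SATA II: 300 MB/s -> 180 MB/s sits between the two,
# plausible for SATA II once FIS framing and handshaking overhead is paid.
```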
135

Relevance Analysis for Document Retrieval

Labouve, Eric 01 March 2019 (has links) (PDF)
Document retrieval systems recover documents from a dataset and order them according to their perceived relevance to a user’s search query. This is a difficult task for machines to accomplish because there exists a semantic gap between the meaning of the terms in a user’s literal query and the user’s true intentions. Even with the ambiguity that arises from this lack of context, users still expect the set of documents returned by a search engine to be both highly relevant to their query and properly ordered. The focus of this thesis is on document retrieval systems that explore methods of ordering documents from unstructured, textual corpora using text queries. The main goal of this study is to enhance the Okapi BM25 document retrieval model. In doing so, this research hypothesizes that the structure of text inside documents and queries holds valuable semantic information that can be incorporated into the Okapi BM25 model to increase its performance. Modifications that account for a term’s part of speech, the proximity between a pair of related terms, the proximity of a term with respect to its location in a document, and query expansion are used to augment Okapi BM25. The study resulted in 87 modifications, which were all validated using open-source corpora. The top-scoring modification from the validation phase was then tested on the Lisa corpus, where the model performed 10.25% better than Okapi BM25 when evaluated under mean average precision. When compared against two industry-standard search engines, Lucene and Solr, the top-scoring modification outperforms these systems by up to 21.78% and 23.01%, respectively.
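
For reference, the baseline being modified is the standard Okapi BM25 ranking function; a compact textbook-form implementation (not the thesis's modified variants) looks like this:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document against a query with textbook Okapi BM25.

    corpus: list of token lists, used for document frequencies and
    the average document length (avgdl).
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in corpus if q in d)             # document frequency
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # smoothed IDF
        f = tf[q]                                          # term frequency in doc
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

docs = [['data', 'storage', 'systems'], ['document', 'retrieval', 'systems'],
        ['storage', 'networks']]
print(bm25_score(['storage', 'systems'], docs[0], docs))
```

The thesis's modifications then adjust this base score with part-of-speech, term-proximity, positional, and query-expansion signals.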
136

TRUE UNMANNED TELEMETRY COLLECTION USING OC-12 NETWORK DATA FORWARDING

Bullers, Bill October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The cost of telemetry collection is significantly reduced by unmanned store-and-forward systems made possible using 622 Mb/s OC-12 networks. Networks are readily available to telemetry system architects. The in-band control of remote unmanned collection platforms is handled through a Java browser interface. Data from many telemetry channels are collected and temporarily stored on a digital disk system designed around the OC-12 network. The I/O, storage, and network components are configured, set, and initialized remotely. Recordings are started and stopped on command and can be made round-the-clock. Files of stored, time-stamped data are delivered at the OC-12 rate to a distribution center.
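
To put the store-and-forward requirement in perspective, a back-of-the-envelope sizing of the on-site disk system for continuous capture at the OC-12 rate (standard OC-12 figures, not from the paper):

```python
oc12_mbps = 622.08                  # OC-12 line rate, Mbit/s
payload_mb_per_s = oc12_mbps / 8    # ~77.8 MB/s, ignoring SONET framing overhead

for hours in (1, 24):
    total_gb = payload_mb_per_s * 3600 * hours / 1000
    print(f'{hours:>2} h of continuous capture: ~{total_gb:,.0f} GB')
# ~280 GB/hour, ~6.7 TB/day -- round-the-clock recording at line rate
# is what dictates the sizing of the remote disk system.
```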
137

Small angle neutron scattering studies of magnetic recording media

Wismayer, Matthew P. January 2008 (has links)
Since the beginning of the twenty-first century, educational and commercial institutions have driven the demand for cheap and efficient data storage. The storage medium known as magnetic recording media has remained the mainstay for most computer systems due to its large storage capacity per dollar. With the recording media's ever-increasing storage density have come reductions in the magnetic grain size per bit. At the recording bit's density threshold, the magnetic grains become more susceptible to thermal activation, which can render the storage medium unusable. An accurate characterisation of the recording layer's sub-granular structure is essential for understanding the magnetic and thermal mechanisms of high-density recording media. Small-Angle Neutron Scattering (SANS) studies have been performed to investigate the magnetic and physical properties of longitudinal and perpendicular recording grains. The SANS studies of longitudinal magnetic recording media probed the recording layer's magnetic grain size at sub-nanometre resolution. In conjunction with these studies, SQUID magnetometry was used to characterise the recording grains' bulk magnetism. Measurements showed that the recording grain was composed of a ferromagnetic hard core (Co-enriched) and a weakly magnetic shell (Cr-enriched). These results provided important information on the grain's magnetic anisotropy, which determines the recording media's magnetic stability. Polarised SANS studies were used to characterise the recording layer's physical granular structure, showing that the physical grain size was comparable to its magnetic counterpart. These physical measurements provided insight into the recording grain's chemical composition. The magnetic properties of perpendicular magnetic recording media were studied using SANS and VSM measurements. The neutron scattering studies revealed that the recording grain was composed of a hard ferromagnetic centre enriched with cobalt. The VSM studies showed that the magnetic recording grains exhibited a large perpendicular magnetic anisotropy. These combined studies provided information on the recording grain's ferromagnetic composition and magnetic stability. The polarised SANS measurements showed the physical grain size to be slightly smaller than its magnetic counterpart; this size difference was attributed to the non-magnetic grain boundary composed of SiO2. The boundary thickness determined the degree of inter-granular exchange coupling. Further polarised studies investigated the recording layer's switching behaviour, which revealed more information on the grain's magnetic stability.
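
The grain sizes probed by SANS follow from the standard small-angle relation between scattering vector and real-space length scale; a worked example with typical cold-neutron values (illustrative assumptions, not the thesis's instrument settings):

```python
import math

wavelength_nm = 0.6    # typical cold-neutron wavelength (assumption)
two_theta_deg = 1.0    # a representative small scattering angle (assumption)

theta = math.radians(two_theta_deg) / 2
q = 4 * math.pi * math.sin(theta) / wavelength_nm  # scattering vector Q, nm^-1
d = 2 * math.pi / q                                # probed real-space length scale

print(f'Q = {q:.3f} nm^-1  ->  d = 2*pi/Q = {d:.1f} nm')
# Smaller grains scatter at correspondingly larger Q, which is how
# sub-10 nm recording grains become accessible to a SANS measurement.
```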
138

Atomar aufgelöste Strukturuntersuchungen für das Verständnis der magnetischen Eigenschaften von FePt-HAMR-Prototypmedien / Atomically resolved structural investigations for understanding the magnetic properties of FePt HAMR prototype media

Wicht, Sebastian 20 December 2016 (has links) (PDF)
Dank der hohen uniaxialen Kristallanisotropie der L10-geordneten Phase gelten nanopartikuläre FePt+C-Schichten als aussichtsreiche Kandidaten zukünftiger Datenspeichersysteme. Aus diesem Grund werden in der vorliegenden Arbeit in Kooperation mit HGST (a Western Digital Company) Prototypen solcher Medien strukturell bis hin zu atomarer Auflösung charakterisiert. Anhand von lokalen Messungen der Gitterparameter der FePt-Partikel wird gezeigt, dass die Partikel dünne, zementitartige Verbindungen an ihrer Oberfläche aufweisen. Zusätzlich werden große Partikel mit kleinem Oberfläche-Volumen-Verhältnis von kontinuierlichen Kohlenstoffschichten umschlossen, was die Deposition weiteren Materials verhindert. Eine Folge davon ist die Entstehung einer zweiten Lage statistisch orientierter Partikel, die sich negativ auf das magnetische Verhalten der FePt-Schicht auswirkt. Weiterhin wird die besondere Bedeutung des eingesetzten Substrats sowie seiner Gitterfehlpassung zur L10-geordneten Einheitszelle nachgewiesen. So lässt sich das Auftreten fehlorientierter ebenso wie das L12-geordneter Kristallite im Fall großer Fehlpassung und einkristalliner Substrate unterdrücken, was andererseits jedoch zu einer stärkeren Verkippung der [001]-Achsen der individuellen FePt-Partikel führt. Abschließend wird mithilfe der Elektronenholographie nachgewiesen, dass die Magnetisierungsrichtungen der FePt-Partikel aufgrund von Anisotropieschwankungen von den [001]-Achsen abweichen können. / Highly textured L10-ordered FePt+C films are foreseen to become the next generation of magnetic data storage media. Prototypes of such media (provided by HGST, a Western Digital Company) are therefore structurally investigated down to the atomic level by HR-TEM, and the observed results are correlated with the magnetic performance of the film. A first study observes a strongly disturbed surface layer with a lattice spacing that corresponds to cementite. Furthermore, the individual particles are surrounded by a thin carbon layer that suppresses the deposition of further material and therefore leads to the formation of a second layer of particles. Lacking contact with the seed layer, these particles are randomly oriented and degrade the magnetic performance of the media. A further study reveals that single-crystalline substrates with an appropriate lattice mismatch to the L10-ordered unit cell can be selected to avoid the formation of in-plane-oriented and L12-ordered crystals. Unfortunately, the required large mismatch results in a broadening of the texture of the [001] axes of the individual grains. As electron holography studies reveal, the orientation of the magnetization of the individual grains can differ from the structural [001] axis due to local fluctuations of the uniaxial anisotropy.
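
The lattice-mismatch argument reduces to a one-line formula; the sketch below uses approximate literature values for L10 FePt, and the substrate value is a placeholder assumption rather than the thesis's actual substrate choice:

```python
# Lattice mismatch between film and substrate: (a_sub - a_film) / a_film.
a_fept_l10 = 0.385   # nm, approximate in-plane lattice parameter of L1_0 FePt
a_mgo = 0.4212       # nm, MgO -- a common seed/substrate choice (assumption)

mismatch = (a_mgo - a_fept_l10) / a_fept_l10
print(f'FePt/MgO mismatch: {mismatch:.1%}')  # roughly 9%, a comparatively large value
# The thesis finds that large mismatch suppresses misoriented and
# L1_2-ordered grains, at the price of a broader [001] texture.
```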
139

Optimizing Virtual Machine I/O Performance in Cloud Environments

Lu, Tao 01 January 2016 (has links)
Maintaining closeness between data sources and data consumers is crucial for workload I/O performance. In cloud environments, this closeness can be violated by system administrative events and storage architecture barriers. VM migration events are frequent in cloud environments; migration changes a VM's runtime inter-connection or cache contexts, significantly degrading its I/O performance. Virtualization is the backbone of cloud platforms, but I/O virtualization adds hops to the workload data access path, prolonging I/O latencies. I/O virtualization overheads cap the throughput of high-speed storage devices and impose high CPU utilization and energy consumption on cloud infrastructures. To maintain the closeness between data sources and workloads during VM migration, we propose Clique, an affinity-aware migration scheduling policy that minimizes the aggregate wide-area communication traffic during storage migration in virtual cluster contexts. In host-side caching contexts, we propose Successor, which recognizes warm pages and prefetches them into the caches of destination hosts before migration completes. To bypass the I/O virtualization barriers, we propose VIP, an adaptive I/O prefetching framework that utilizes a virtual I/O front-end buffer for prefetching, avoiding the on-demand involvement of I/O virtualization stacks and accelerating I/O response. Analysis of the traffic trace of a virtual cluster containing 68 VMs demonstrates that Clique can reduce inter-cloud traffic by up to 40%. Tests with the MPI Reduce_scatter benchmark show that Clique can keep VM performance during migration at up to 75% of the non-migration scenario, more than three times that of a random VM-selection policy. In host-side caching environments, Successor performs better than existing cache warm-up solutions and achieves zero VM-perceived cache warm-up time with low resource costs. At the system level, we conducted a comprehensive quantitative analysis of I/O virtualization overheads; our trace-replay-based simulation demonstrates the effectiveness of VIP for data prefetching with negligible additional cache resource costs.
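
As a toy illustration of the affinity idea behind Clique (a greedy sketch under assumed inputs, not the paper's actual algorithm), VMs that exchange heavy traffic can be clustered so that each cluster migrates together and its chatter never crosses the wide-area link mid-migration:

```python
def group_by_affinity(traffic, threshold):
    """Greedily cluster VMs whose pairwise traffic exceeds threshold.

    traffic: dict mapping frozenset({vm_a, vm_b}) -> bytes/s.
    Returns VM groups to migrate together, so heavy communication stays
    intra-group rather than crossing the WAN during migration.
    """
    vms = sorted({vm for pair in traffic for vm in pair})
    group_of = {}
    for pair, rate in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if rate < threshold:
            break  # remaining pairs are all lighter than the cutoff
        a, b = tuple(pair)
        ga, gb = group_of.get(a), group_of.get(b)
        if ga is None and gb is None:
            group_of[a] = group_of[b] = {a, b}     # start a new group
        elif ga is not None and gb is None:
            ga.add(b); group_of[b] = ga            # pull b into a's group
        elif ga is None:
            gb.add(a); group_of[a] = gb            # pull a into b's group
        elif ga is not gb:
            ga |= gb                                # merge two groups
            for vm in gb:
                group_of[vm] = ga
    singletons = [{vm} for vm in vms if vm not in group_of]
    unique = {id(g): g for g in group_of.values()}
    return list(unique.values()) + singletons

traffic = {frozenset({'vm1', 'vm2'}): 50e6, frozenset({'vm2', 'vm3'}): 40e6,
           frozenset({'vm3', 'vm4'}): 1e6}
print(group_by_affinity(traffic, threshold=10e6))  # [{'vm1','vm2','vm3'}, {'vm4'}]
```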
140

Conception, validation et mise en oeuvre d’une architecture de stockage de données de très haute capacité basée sur le principe de la photographie Lippmann / Conception, validation and implementation of a new architecture of high capacity optical storage based on Lippmann's photography

Contreras Villalobos, Kevin 04 February 2011 (has links)
Le stockage de données par holographie suscite un intérêt renouvelé. Il semble bien placé pour conduire à une nouvelle génération de mémoires optiques aux capacités et débits de lecture bien supérieurs à ceux des disques optiques actuels basés sur l’enregistrement dit surfacique. Dans ce travail de thèse, nous proposons une nouvelle architecture de stockage optique de données qui s’inspire du principe de la photographie interférentielle de Lippmann. Les informations y sont inscrites dans le volume du matériau d’enregistrement sous la forme de pages de données par multiplexage en longueur d’onde en exploitant la sélectivité de Bragg. Cette technique, bien que très voisine de l’holographie, n’avait jamais été envisagée pour le stockage à hautes capacités. L’objectif de la thèse a été d’analyser cette nouvelle architecture afin de déterminer les conditions pouvant conduire à de très hautes capacités. Cette analyse s’est appuyée sur un outil de simulation numérique des processus de diffraction en jeu dans cette mémoire interférentielle. Elle nous a permis de définir deux conditions sous lesquelles ces hautes capacités sont atteignables. En respectant ces conditions, nous avons conçu un démonstrateur de mémoire dit de « Lippmann » et avons ainsi démontré expérimentalement que la capacité est bien proportionnelle à l’épaisseur du matériau d’enregistrement. Avec une telle architecture, des capacités de l’ordre du Téraoctet sont attendues pour des disques de 12 cm de diamètre. / Holographic data storage is attracting renewed interest. It appears well placed to lead to a new generation of optical memories with capacities and read-out rates far higher than those of current optical discs based on surface recording. In this thesis, we propose a new architecture for optical data storage inspired by the principle of Lippmann interferential photography. Information is recorded throughout the volume of the recording material in the form of data pages multiplexed in wavelength by exploiting Bragg selectivity. This technique, although very close to holography, had never been considered for high-capacity storage. The aim of the thesis was to analyze this new architecture in order to determine the conditions that can lead to very high capacities. The analysis was based on a numerical simulation tool for the diffraction processes involved in this interferential memory, and it allowed us to define two conditions under which such high capacities are achievable. Following these conditions, we built a demonstrator of this "Lippmann memory" and demonstrated experimentally that the capacity is indeed proportional to the thickness of the recording material. With such an architecture, capacities on the order of a terabyte are expected for discs 12 cm in diameter.
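
The claim that capacity scales with material thickness follows from Bragg wavelength selectivity: for a reflection (Lippmann-type) grating the selectivity is roughly Δλ ≈ λ² / (2 n L), so the number of wavelength-multiplexed pages within a source's tuning range grows linearly with thickness L. A hedged estimate with illustrative numbers (assumed values, not the thesis's parameters):

```python
lam = 500e-9          # recording wavelength, m (illustrative)
n = 1.5               # refractive index of the recording material (assumption)
tuning_range = 40e-9  # usable wavelength tuning range of the source (assumption)

for thickness_um in (10, 100, 500):
    L = thickness_um * 1e-6
    dlam = lam**2 / (2 * n * L)     # Bragg wavelength selectivity
    channels = int(tuning_range / dlam)
    print(f'L = {thickness_um:>3} um: selectivity {dlam*1e9:.2f} nm, '
          f'~{channels} wavelength channels')
# Channel count -- and hence capacity -- grows linearly with thickness,
# consistent with the experimental result quoted in the abstract.
```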
