1

Forensisk undersökning av Solid State Drive / Forensic investigation of Solid State Drive

Ritola, Richard January 2012 (has links)
Solid State Drives (SSD) are relatively new and not much is known about them. This thesis focuses on retrieving forensically important information, not only from the storage area of the SSD but also from the spare area. This was attempted by writing a program in C++ that, using ATA commands, could read information from the SSD. Although the program was not finished within the given time, it could read some information from the SSD, but not the spare area, which was the main focus.
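The thesis does not publish its C++ source, but the general approach it describes (issuing ATA commands directly to the drive) can be sketched as follows. This is a minimal, hedged illustration assuming a Linux host, the SG_IO ATA pass-through interface, and an IDENTIFY DEVICE command; the device path and all details are assumptions, not the author's actual program.

```cpp
// Sketch only: issue an ATA IDENTIFY DEVICE command to a drive on Linux via the
// SG_IO ATA PASS-THROUGH (16) interface. Device path and command choice are
// illustrative assumptions; reading the spare area would require vendor commands.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main() {
    int fd = open("/dev/sda", O_RDONLY);          // assumed device node
    if (fd < 0) { std::perror("open"); return 1; }

    uint8_t cdb[16] = {0};
    uint8_t data[512] = {0};                      // IDENTIFY DEVICE returns 512 bytes
    uint8_t sense[32] = {0};

    cdb[0]  = 0x85;        // ATA PASS-THROUGH (16)
    cdb[1]  = (4 << 1);    // protocol 4 = PIO Data-In
    cdb[2]  = 0x0E;        // T_DIR=1 (from device), BYT_BLOK=1, T_LENGTH=sector count
    cdb[6]  = 1;           // sector count = 1
    cdb[14] = 0xEC;        // ATA command: IDENTIFY DEVICE

    sg_io_hdr_t io;
    std::memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.cmd_len         = sizeof(cdb);
    io.cmdp            = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.dxfer_len       = sizeof(data);
    io.dxferp          = data;
    io.sbp             = sense;
    io.mx_sb_len       = sizeof(sense);
    io.timeout         = 5000;                    // milliseconds

    if (ioctl(fd, SG_IO, &io) < 0) { std::perror("SG_IO"); close(fd); return 1; }

    // Words 27-46 of the IDENTIFY data hold the model string (byte-swapped pairs).
    char model[41] = {0};
    for (int i = 0; i < 40; i += 2) { model[i] = data[54 + i + 1]; model[i + 1] = data[54 + i]; }
    std::printf("Model: %s\n", model);

    close(fd);
    return 0;
}
```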
2

Towards a scalable design of video content distribution over the internet

Ryu, Mungyung 21 September 2015 (has links)
We are witnessing a proliferation of video on the Internet; YouTube is the most bandwidth-intensive service of today's Internet. It accounts for 20-35% of Internet traffic, with 35 hours of video uploaded every minute and more than 700 billion playbacks in 2010. Netflix, a web service that streams premium content such as TV series, shows, and movies, consumes 30% of the network bandwidth in North America at peak time. Recently, leveraging content distribution networks (CDNs), a new paradigm for video streaming on the Internet has emerged, namely Adaptive HTTP Streaming (AHS). AHS has become the industry standard for video streaming over the Internet, adopted by broadcast networks as well as VoD services such as YouTube, Netflix, Hulu, etc. In the 90's and early 2000's, Internet-based video streaming of high-bitrate video was challenging due to hardware limitations. In that era, to work around the hardware limitations, every software component of a video server needed to be carefully optimized to support the real-time guarantees for jitter-free video delivery. However, most of those software solutions have become less important with the remarkable hardware improvements over the past two decades. There is a 100× speedup in CPU speeds; RAM capacity has increased by 1,000×; hard disk drive (HDD) capacity has grown by 10,000×. Today, the CPU is no longer a bottleneck for video streaming. On the other hand, storage bandwidth and network bandwidth are still serious bottlenecks for large-scale on-demand video streaming. In this dissertation, we aim at a scalable video content distribution system that addresses both the storage bottleneck and the network bottleneck. The first part of the dissertation pertains to the storage system on the server side: a multi-tiered storage system that exploits a flash memory solid-state drive (SSD) can meet the bandwidth needs in a much more cost-effective way than a traditional two-tier storage system. We first identify the challenges in architecting such a system given the performance quirks of flash-based SSDs and the limitations of state-of-the-art multi-tiered storage systems for video streaming. Armed with the knowledge of these challenges, we show how to construct such a storage system, implement a real web server with multi-tiered storage, evaluate the system with AHS workloads, and demonstrate significant performance gains while reducing the TCO. The second part of the dissertation pertains to the network system on the client side: integrating peer-to-peer (P2P) technology with the client-server paradigm results in a much more scalable video content distribution system. AHS is a paradigm for client-driven video streaming; its philosophy matches well with that of P2P video streaming. An adaptation mechanism is the most important component of AHS, determining overall video streaming quality and user experience. We show that a throughput-smoothing-based adaptation mechanism designed for a client-server architecture does not work well for a P2P architecture. We provide a buffer-based adaptation mechanism, evaluate our solution with the OMNeT++/INET simulator, and demonstrate significant performance gains.
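As an illustration of the buffer-based adaptation idea mentioned in the abstract (not the dissertation's actual algorithm), the following sketch picks the next segment bitrate from playout-buffer occupancy alone; the bitrate ladder and thresholds are assumed values.

```cpp
// Illustrative sketch of buffer-based bitrate adaptation: the next segment's
// bitrate is chosen from the current playout buffer level rather than from a
// smoothed throughput estimate. Ladder and thresholds are assumptions.
#include <cstdio>
#include <vector>

// Available representations in bits per second, lowest to highest (assumed ladder).
const std::vector<double> kBitrates = {400e3, 1000e3, 2500e3, 5000e3};

// Map buffer occupancy (seconds of video buffered) to a bitrate.
// Below kReservoir play it safe; above kCushion use the top rate;
// in between, ramp linearly across the ladder.
double pickBitrate(double bufferSeconds) {
    const double kReservoir = 5.0;   // assumed lower threshold (s)
    const double kCushion   = 30.0;  // assumed upper threshold (s)
    if (bufferSeconds <= kReservoir) return kBitrates.front();
    if (bufferSeconds >= kCushion)   return kBitrates.back();
    double frac = (bufferSeconds - kReservoir) / (kCushion - kReservoir);
    size_t idx = static_cast<size_t>(frac * (kBitrates.size() - 1) + 0.5);
    return kBitrates[idx];
}

int main() {
    for (double buf : {2.0, 10.0, 20.0, 35.0})
        std::printf("buffer %.0fs -> %.0f kbps\n", buf, pickBitrate(buf) / 1000);
    return 0;
}
```

Deciding from the buffer rather than from throughput samples is what makes this style of adaptation attractive in P2P settings, where per-connection throughput is bursty and hard to smooth.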
3

Android Application Context Aware I/O Scheduler

January 2014 (has links)
abstract: Android has been the dominant platform on which most mobile development is being done. By the end of the second quarter of 2014, Android had captured 84.7 percent of the worldwide mobile phone market share. The Android stack internally uses a modified Linux kernel. The I/O scheduler, a part of the Linux kernel, is responsible for scheduling data requests to the internal and external memory devices attached to the mobile system. The use of solid state drives in Android tablets has also risen owing to their speed of operation and mechanical stability. The I/O schedulers in the present Linux kernel are not well suited to handling solid state drives, in particular to exploiting the inherent parallelism they offer. Android provides information to the Linux kernel about which processes are running in the foreground and background. Based on this information the kernel decides on process scheduling and memory management, but no such information is used for I/O scheduling. Research shows that resource management can be done better if the operating system is aware of the characteristics of the requester. Thus, there is a need for a better I/O scheduler that can schedule I/O operations based on the application and also exploit the parallelism of solid state drives. The scheduler proposed in this research does that. It contains two algorithms working in unison: one focusing on the solid state drive and the other on application awareness. The Android application context aware scheduler increases the responsiveness of time-sensitive applications and also increases throughput by scheduling requests in parallel on the solid state drive. The proposed scheduler is tested using standard benchmarks and real-time scenarios; the results show that it outperforms Android's existing default Completely Fair Queuing scheduler. / Dissertation/Thesis / Masters Thesis Computer Science 2014
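A rough user-space illustration of the application-context-aware idea (the real scheduler is implemented inside the Linux block layer, not shown in the thesis record) might look like the following sketch; all names, queue structures, and the batch size are assumptions.

```cpp
// Sketch: requests tagged with the issuing app's foreground/background state are
// kept in separate queues; foreground requests are dispatched first, and requests
// are released in batches so the SSD's internal parallelism can be exploited.
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

struct IoRequest {
    std::string app;    // issuing application
    long sector;        // target sector
    bool foreground;    // context hint passed down from the platform
};

class ContextAwareScheduler {
public:
    void submit(const IoRequest& r) { (r.foreground ? fg_ : bg_).push_back(r); }

    // Dispatch up to `batch` requests, draining the foreground queue first.
    std::vector<IoRequest> dispatch(size_t batch = 8) {
        std::vector<IoRequest> out;
        auto drain = [&](std::deque<IoRequest>& q) {
            while (!q.empty() && out.size() < batch) { out.push_back(q.front()); q.pop_front(); }
        };
        drain(fg_);   // time-sensitive, user-visible apps first
        drain(bg_);   // background apps fill the remaining slots
        return out;
    }

private:
    std::deque<IoRequest> fg_, bg_;
};

int main() {
    ContextAwareScheduler sched;
    sched.submit({"camera", 1000, true});
    sched.submit({"sync-service", 2000, false});
    sched.submit({"browser", 3000, true});
    for (const auto& r : sched.dispatch())
        std::printf("dispatch %s sector %ld\n", r.app.c_str(), r.sector);
    return 0;
}
```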
4

Disková pole RAID a jejich budoucnost v éře SSD / Future of disk arrays in SSD era

Sládek, Petr January 2012 (has links)
The thesis aims to verify the use of emerging solid-state drives in disk arrays. The advent of SSDs caused a small revolution in the area of data storage, because the performance growth of hard drives has been slow compared to other PC components. But an entirely different principle of operation could mean compatibility problems between SSDs and related technologies, such as RAID. This thesis therefore analyzes all the relevant technologies, mainly HDD, SSD and RAID. To achieve this objective, information from literature, articles and other appropriate sources will be used. Another objective of this thesis is to determine how suitable SSDs are for use in a disk array, because low-performance RAID controllers or the different principles of operation could limit their efficiency. This question should be answered by subjecting selected types of storage arrays to synthetic and practical performance tests. The final goal is a financial analysis of the tested solutions used as shared file storage. Today, remote access to data is used by a wide range of job positions; slow storage could mean inefficient use of working time and therefore unnecessary financial costs. The goal of my work is primarily to provide answers to the questions mentioned above. Currently it is very hard to find tests of more complex forms of disk arrays based on solid-state drives. This work can also be very useful for companies where file servers are used to share user data. Based on the results of the cost analysis, a company can then decide what type of storage is best for its purpose.
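For readers unfamiliar with the kind of synthetic tests referred to above, a minimal random-read microbenchmark of the sort used to compare storage configurations could look like this sketch. The device path, block size, and request count are assumptions, and it is not the thesis's actual test suite.

```cpp
// Sketch of a simple synthetic random-read test. O_DIRECT bypasses the page cache
// so the array itself is measured; run against a test file or array with care.
#include <cstdio>
#include <cstdlib>
#include <chrono>
#include <fcntl.h>
#include <unistd.h>

int main() {
    const char* path = "/dev/md0";              // assumed RAID device or test file
    const size_t kBlock = 4096;                 // 4 KiB random reads
    const int kRequests = 10000;

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { std::perror("open"); return 1; }

    off_t devSize = lseek(fd, 0, SEEK_END);
    long blocks = devSize / kBlock;

    void* buf = nullptr;
    if (posix_memalign(&buf, kBlock, kBlock) != 0) { close(fd); return 1; }

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kRequests; ++i) {
        off_t off = (off_t)(std::rand() % blocks) * kBlock;   // random aligned offset
        if (pread(fd, buf, kBlock, off) != (ssize_t)kBlock) { std::perror("pread"); break; }
    }
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%d reads in %.2f s -> %.0f IOPS\n", kRequests, secs, kRequests / secs);

    free(buf);
    close(fd);
    return 0;
}
```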
5

On Performance Optimization and System Design of Flash Memory based Solid State Drives in the Storage Hierarchy

Chen, Feng 28 September 2010 (has links)
No description available.
6

A high-throughput in-memory index, durable on flash-based SSD

Kissinger, Thomas, Schlegel, Benjamin, Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang 14 February 2013 (has links) (PDF)
Growing memory capacities and the increasing number of cores on modern hardware enforce the design of new in-memory indexing structures that reduce the number of memory transfers and minimize the need for locking to allow massively parallel access. However, most applications depend on hard durability constraints, requiring a persistent medium like SSDs, which shorten the latency and throughput gap between main memory and hard disks. In this paper, we present our winning solution of the SIGMOD Programming Contest 2011. It consists of an in-memory indexing structure that provides balanced read/write performance as well as non-blocking reads and single-lock writes. Complementary to this index, we describe an SSD-optimized logging approach that meets hard durability requirements at a high throughput rate.
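The abstract's combination of non-blocking reads, single-lock writes, and SSD-backed logging can be illustrated with a minimal sketch of the general pattern. This is not the contest-winning data structure itself, and snapshot reclamation is deliberately omitted.

```cpp
// Sketch: reads are non-blocking (one atomic load of the current snapshot); writes
// take a single lock, append to an SSD-backed log for durability, then publish a
// new snapshot. Old snapshots are leaked here for brevity; a real index would
// reclaim them (e.g. with epochs or hazard pointers).
#include <atomic>
#include <cstdio>
#include <map>
#include <mutex>
#include <string>
#include <fcntl.h>
#include <unistd.h>

class DurableIndex {
public:
    explicit DurableIndex(const char* logPath)
        : current_(new std::map<std::string, std::string>()),
          log_(open(logPath, O_WRONLY | O_CREAT | O_APPEND, 0644)) {}

    // Non-blocking read: one atomic pointer load, no locks.
    bool get(const std::string& key, std::string& out) const {
        const auto* snap = current_.load(std::memory_order_acquire);
        auto it = snap->find(key);
        if (it == snap->end()) return false;
        out = it->second;
        return true;
    }

    // Single-lock write: log first (durability), then publish the new snapshot.
    void put(const std::string& key, const std::string& value) {
        std::lock_guard<std::mutex> guard(writeLock_);
        std::string rec = key + "=" + value + "\n";
        if (write(log_, rec.data(), rec.size()) < 0) std::perror("write");
        fsync(log_);                                       // hard durability on the SSD
        auto* next = new std::map<std::string, std::string>(*current_.load());
        (*next)[key] = value;
        current_.store(next, std::memory_order_release);   // old snapshot leaked (sketch)
    }

private:
    std::atomic<std::map<std::string, std::string>*> current_;
    std::mutex writeLock_;
    int log_;
};

int main() {
    DurableIndex idx("/tmp/index.log");   // assumed log location
    idx.put("k1", "v1");
    std::string v;
    if (idx.get("k1", v)) std::printf("k1 -> %s\n", v.c_str());
    return 0;
}
```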
7

SSDs påverkan på MySQL: En prestandajämförelse / The impact of SSDs on MySQL: A performance comparison

Carlsson, Jacob, Gashi, Edison January 2012 (has links)
Solid State Drives (SSD) are now becoming more common as storage media and are about to become an alternative to magnetic disks. This report studied how to best utilize SSDs in a MySQL database. The study was carried out using experiments in which performance benchmarks were run to get an accurate view of which configuration of SSDs gives the best performance in MySQL. The benchmarks were made with sql-bench and mysqlslap. The results indicate that a database on a single SSD performs as well as a database with an SSD cache in the majority of the tests, and shows better results than the remaining configurations, which were a database on a hard disk and a configuration with the transaction log on an SSD.
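The study used sql-bench and mysqlslap; purely as an illustration of what such a timed measurement amounts to, the following sketch times a fixed query against MySQL using the MySQL C API. The connection parameters, schema, and query are assumptions, not the study's setup.

```cpp
// Sketch: time a fixed query in a loop and report queries per second.
// Build with -lmysqlclient; all connection details below are placeholders.
#include <chrono>
#include <cstdio>
#include <mysql/mysql.h>

int main() {
    MYSQL* conn = mysql_init(nullptr);
    if (!mysql_real_connect(conn, "127.0.0.1", "bench", "secret",
                            "benchdb", 3306, nullptr, 0)) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    const int kQueries = 1000;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kQueries; ++i) {
        if (mysql_query(conn, "SELECT id, payload FROM bench_table WHERE id = 42")) {
            std::fprintf(stderr, "query failed: %s\n", mysql_error(conn));
            break;
        }
        MYSQL_RES* res = mysql_store_result(conn);   // drain the result set
        mysql_free_result(res);
    }
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%d queries in %.3f s -> %.0f q/s\n", kQueries, secs, kQueries / secs);

    mysql_close(conn);
    return 0;
}
```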
8

A high-throughput in-memory index, durable on flash-based SSD: Insights into the winning solution of the SIGMOD programming contest 2011

Kissinger, Thomas, Schlegel, Benjamin, Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang January 2012 (has links)
Growing memory capacities and the increasing number of cores on modern hardware enforce the design of new in-memory indexing structures that reduce the number of memory transfers and minimize the need for locking to allow massively parallel access. However, most applications depend on hard durability constraints, requiring a persistent medium like SSDs, which shorten the latency and throughput gap between main memory and hard disks. In this paper, we present our winning solution of the SIGMOD Programming Contest 2011. It consists of an in-memory indexing structure that provides balanced read/write performance as well as non-blocking reads and single-lock writes. Complementary to this index, we describe an SSD-optimized logging approach that meets hard durability requirements at a high throughput rate.
9

Disk na bázi paměti FLASH / Disk Drive Based on FLASH Memory

Dvořák, Miroslav January 2012 (has links)
The work deals with flash technology, the history of its development, and current applications of this technology, and discusses the advantages and disadvantages of flash memories. It describes the integration of flash technology into mass storage devices and the commonly used mechanisms that suppress the shortcomings of flash in such applications. The next part of the work focuses on an analysis of the buses commonly used for flash storage devices. Based on these theoretical foundations, the text presents a way to develop a custom flash-based disk. The work focuses mainly on the most accessible platform for connecting the disk to personal computers (USB), on the PCB design of the storage module in Eagle CAD, and on the implementation of the necessary MCU firmware and FPGA VHDL design that provide the disk functionality. At the end, the work summarizes the results and outlines the direction of further development.
10

Quantification of emissions in the ICT sector – a comparative analysis of the Product Life Cycle Assessment and Spend-based methods: Optimal value chain accounting (Scope 3, category 1)

Rajesh Jha, Abhishek kumar January 2022 (has links)
Considering the rapid increase in ICT (Information & Communication Technology) products in use, there is a risk of an increase in GHG emissions and electronic waste accumulation in the ICT sector. Therefore, it becomes important to account for the emissions of the ICT sector in order to take steps to mitigate them. There are several methods put forward under ETSI, ITU-T, the GHG Protocol, etc., which can be used to measure the emissions of the ICT sector. Two such methods are Product Life Cycle Assessment (PLCA) and the Spend-based method, which are used in this study to account for scope 3, category 1 emissions in the ICT sector. Scope 3, category 1 emissions are released during the raw material acquisition and part production phase of an ICT product's life cycle and account for a major portion of the overall emissions. As the ICT sector is a very broad field of study in itself, two ICT products, namely smartphones and laptops, are considered in this study to calculate their overall scope 3, category 1 emissions. A list of influential components in smartphones and laptops is defined and included in the Excel Management Life Cycle Assessment (EMLCA) tool to calculate the scope 3, category 1 emissions. A comprehensive comparison between the PLCA and Spend-based methods is also carried out during the process of calculating the emissions. These observations are then used in the results and discussion to critically analyze and compare the two methods on the basis of the various parameters described there. Both methods were found to be suitable for calculating the emissions, with some uncertainty, although the Spend-based method was the quicker approach. The PLCA method, although more complex, was found to be more suitable for ICT product eco-design. Both methods required a different set of primary data and were sensitive to various components in smartphones and laptops. This study illustrates the parameters that affect the PLCA and Spend-based methods and discusses their pros and cons depending on the situations in which they are used.
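As a purely illustrative example of the difference between the two methods (all numbers below are hypothetical assumptions, not results from the study): the Spend-based method estimates emissions as spend multiplied by an economy-wide emission factor, so a purchase of 100,000 EUR of laptops with an assumed factor of 0.25 kg CO2e per EUR would be booked as 100,000 × 0.25 = 25,000 kg CO2e. The PLCA method would instead sum per-component factors over the bill of materials, for example (display + mainboard + memory + enclosure) per laptop multiplied by the number of units purchased, which is why it needs far more primary data but can attribute emissions to individual components for eco-design.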
