11

Low complexity bit-level soft-decision decoding for Reed-Solomon codes

Oh, Min-seok January 1999 (has links)
Reed-Solomon (RS) codes are an important method for achieving error correction in communication and storage systems. However, it has proved difficult to find a soft-decision decoding method for them with low complexity, and some previous soft-decision decoding approaches could not fully exploit bit-level soft-decision information. Given the powerful error-correction capability of RS codes, this is a critical shortcoming. This thesis presents bit-level soft-decision decoding schemes for RS codes. The aim is to design a low-complexity sequential decoding method, based on bit-level soft-decision information, that approaches maximum-likelihood performance. Firstly, a trellis decoding scheme that is easy to implement is introduced, since the soft-decision information can be used directly. To allow bit-level soft decision, a binary equivalent code is introduced and Wolf's method is used to construct the binary trellis from a systematic parity-check matrix. Secondly, the Fano sequential decoding method is chosen; it is sub-optimal, adapts to channel conditions, and does not need a large amount of storage to perform an efficient trellis search. The Fano algorithm is then modified to improve its error-correcting performance. Finally, further methods of complexity reduction are presented that incur no loss of decoding performance, based on reliability-first search decoding using permutation groups for RS codes. Compared with the decoder without permutation, these schemes give a large complexity reduction and a performance improvement approaching maximum-likelihood performance. Three types of permutation (cyclic, squaring, and hybrid) are presented, and decoding methods using them are implemented.
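As a rough illustration of the syndrome-trellis idea behind Wolf's construction (a minimal sketch, not the thesis's implementation; the small parity-check matrix `H` is a hypothetical toy example), the states at depth i are the partial syndromes of all length-i bit prefixes:

```python
import numpy as np

def wolf_trellis(H):
    """Build a syndrome trellis for a binary parity-check matrix H.

    States at depth i are partial syndromes of length-i prefixes;
    an edge labelled b maps state s to s XOR (b * column i of H).
    Codewords correspond to paths ending in the all-zero syndrome.
    """
    m, n = H.shape
    trellis = []            # trellis[i] maps state -> {bit: next_state}
    states = {0}
    for i in range(n):
        col = 0
        for r in range(m):  # pack column i of H into an integer
            col |= int(H[r, i]) << r
        edges, nxt = {}, set()
        for s in states:
            edges[s] = {0: s, 1: s ^ col}
            nxt.update(edges[s].values())
        trellis.append(edges)
        states = nxt
    return trellis

H = np.array([[1, 0, 1, 1, 0],   # hypothetical toy parity-check matrix
              [0, 1, 1, 0, 1]])
t = wolf_trellis(H)
```

A soft-decision decoder would then search this trellis (with Viterbi or, as in the thesis, a Fano-style sequential search) using bit-level reliabilities as branch metrics.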
12

Flash Caching for Cloud Computing Systems

Arteaga Clavijo, Dulcardo Ariel 18 March 2016 (has links)
As the size of cloud systems and the number of hosted virtual machines (VMs) rapidly grow, the scalability of shared VM storage systems becomes a serious issue. Client-side flash-based caching has the potential to improve the performance of cloud VM storage by employing flash storage available on the VM hosts to exploit the locality inherent in VM IOs. However, there are several challenges to the effective use of flash caching in cloud systems. First, cache configurations such as size, write policy, metadata persistence, and RAID level have significant impacts on flash caching. Second, the typical capacity of flash devices is limited compared to the dataset size of consolidated VMs. Finally, flash devices wear out and face serious endurance issues, which are aggravated by their use for caching. This dissertation addresses these problems of cloud flash caching in three aspects. First, it presents a thorough study of different cache configurations, including a new cache-optimized RAID configuration, using a large amount of long-term traces collected from real-world public and private clouds. Second, it studies an on-demand flash cache management solution for meeting VM cache demands and minimizing device wear-out. It uses a new cache demand model, Reuse Working Set (RWS), to capture the data with good temporal locality, and uses the RWS size (RWSS) to model a workload's cache demand. Finally, to handle situations where a cache is insufficient for the VMs' demands, it employs dynamic cache migration to balance cache load across hosts by live-migrating cached data along with the VMs. The results show that the cache-optimized RAID improves performance by 137% without sacrificing reliability, compared to traditional RAID. The RWSS-based on-demand cache allocation reduces a workload's cache usage by 78% and lowers the amount of writes sent to the cache device by 40%, compared to traditional working-set-based cache allocation. Combining on-demand cache allocation with dynamic cache migration for 12 concurrent VMs yields a 28% higher hit ratio and a 28% lower 90th-percentile IO latency, compared to the case without cache allocation.
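A minimal sketch of the Reuse Working Set idea (the trace format and windowing are illustrative assumptions, not the dissertation's code): the RWS keeps only blocks that are actually reused within a window, and the RWSS sizes the cache to those blocks rather than to the full working set:

```python
from collections import Counter

def rwss(trace, window):
    """Reuse Working Set Size: the number of blocks accessed at least
    twice within a window of the trace (blocks with reuse locality)."""
    sizes = []
    for start in range(0, len(trace), window):
        counts = Counter(trace[start:start + window])
        sizes.append(sum(1 for c in counts.values() if c >= 2))
    return max(sizes) if sizes else 0

trace = [1, 2, 3, 1, 2, 4, 5, 1, 2, 6]   # hypothetical block addresses
print(rwss(trace, window=5))             # cache sized to reused blocks only
```

Sizing to the RWSS instead of the classical working set is what lets single-access (scan) data bypass the flash cache, reducing both cache usage and write-induced wear.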
13

Improving Caches in Consolidated Environments

Koller, Ricardo 24 July 2012 (has links)
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal, but memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through memory caches, by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one. The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer-cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted by contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
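As a sketch of how page-level duplication in a consolidated buffer cache might be measured (content fingerprinting by hash; a hypothetical illustration, not the dissertation's tooling):

```python
import hashlib
from collections import defaultdict

def duplication_ratio(pages):
    """Fraction of cached pages whose content duplicates another page,
    detected by grouping pages on a content hash."""
    groups = defaultdict(int)
    for page in pages:
        groups[hashlib.sha1(page).hexdigest()] += 1
    duplicates = sum(count - 1 for count in groups.values())
    return duplicates / len(pages) if pages else 0.0

# Hypothetical: two consolidated workloads caching overlapping file data.
pages = [b"libc page", b"app data", b"libc page", b"index page", b"libc page"]
print(duplication_ratio(pages))  # 0.4: two of the five pages are redundant
```

A deduplicating cache would keep one copy per content hash and map multiple logical pages onto it, reclaiming exactly that redundant fraction.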
14

A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images

Lizarraga, Gabriel M 12 October 2018 (has links)
Structural and functional brain images are generated as essential modalities for medical experts to learn about the different functions of the brain. These images are typically inspected visually by experts. Many software packages are available to process medical images, but they are complex, difficult to use, and hardware intensive. Consequently, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines on a common platform to store, process, visualize, and share data. The NWSI system is made up of password-protected interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open-source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently; new features can be added via modules while keeping the core software secure and intact. The webserver architecture allows for the visualization of results and the downloading of tabulated data. Several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images; subsequently, PET, DTI, and fMRI images can be uploaded. The webserver captures uploaded images and performs essential functionality, while processing occurs in supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images. The interface also offers onsite access to machine learning algorithms and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer's Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
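A minimal sketch of the regional SUVR computation (the region names and choice of cerebellum as reference region are illustrative assumptions, not taken from the dissertation):

```python
def suvr(regional_uptake, reference="cerebellum"):
    """Standardized Uptake Value ratio: each region's mean PET uptake
    normalized by the uptake of a reference region."""
    ref = regional_uptake[reference]
    return {r: v / ref for r, v in regional_uptake.items() if r != reference}

# Hypothetical mean PET uptake per FreeSurfer-derived region.
uptake = {"frontal": 1.41, "temporal": 1.28, "cerebellum": 1.05}
print(suvr(uptake))  # e.g. frontal SUVR ~= 1.34
```

Centiloid calibration then applies a linear transform to such SUVRs so that values from different tracers (here, Florbetapir and Florbetaben) fall on a common scale.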
15

A performance evaluation of peer-to-peer storage systems

Mutombo, Deya Mwembya 14 February 2013 (has links)
This work evaluates the performance of peer-to-peer (P2P) storage systems in structured P2P networks under a continuous process of nodes joining and leaving the network (churn). Based on Distributed Hash Tables (DHTs), peer-to-peer systems provide the means to store data among a large and dynamic set of participating host nodes, yet existing solutions either do not tolerate a high churn rate or do not scale in the number of stored data blocks. The performance metrics considered include the number of data blocks lost, bandwidth consumption, latencies, and the distance of matched lookups. We selected Pastry, Chord, and Kademlia to evaluate the effect of inopportune connections and disconnections in peer-to-peer storage systems, because these P2P networks possess distinctive characteristics. Chord is one of the first structured P2P networks implementing DHTs. Similar to Chord, Pastry is based on a ring structure, with the identifier space forming the ring; however, Pastry uses a different algorithm than Chord to select the overlay neighbors of a peer. Kademlia is a more recent structured P2P network, with an XOR mechanism for improving distance calculation. DHT deployments are characterized by churn, and if the frequency of churn is too high, data blocks can be lost and the lookup mechanism begins to incur delays. In architectures that employ DHTs, the choice of algorithm for data replication and maintenance can have a significant impact on performance and reliability. PAST is a persistent peer-to-peer storage utility which replicates complete files on multiple nodes and uses Pastry for message routing and content location. The hypothesis is that a system that enhances churn tolerance through really efficient replication and maintenance mechanisms will: i) operate better than a peer-to-peer storage system such as PAST, especially in its replica placement strategy, with fewer data transfers; and ii) resolve file lookups with a match that is closer to the source peer, thus conserving bandwidth. Our research involves a series of simulation studies using two network simulators, OverSim and OMNeT++. The main results are: our approach achieves higher data availability in the presence of churn than the original PAST replication strategy; for churn occurring every minute, our strategy loses half as many blocks as PAST; and our replication strategy induces, on average, half as many block transfers as PAST.
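For example, Kademlia's XOR metric (sketched below with hypothetical 4-bit node IDs) makes distance symmetric and unique per key, which simplifies finding the nodes responsible for a replica:

```python
def xor_distance(a, b):
    """Kademlia distance between two node/key IDs: bitwise XOR,
    interpreted as an unsigned integer (symmetric and unidirectional)."""
    return a ^ b

def closest_nodes(key, nodes, k=3):
    """Return the k nodes closest to a key, as a replica lookup would."""
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

nodes = [0b0011, 0b0110, 0b1001, 0b1100]   # hypothetical 4-bit node IDs
print(closest_nodes(0b0111, nodes))        # candidate replica holders
```

Under churn, a maintenance mechanism would periodically recompute this set and re-replicate blocks whose closest-node set has changed.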
16

Mathematical modelling and control of renewable energy systems and battery storage systems

Wijewardana, Singappuli M. January 2017 (has links)
The intermittent nature of renewable energy sources like wind and solar poses new challenges to harnessing and supplying uninterrupted power for consumer usage. Though converting energy from these sources into useful forms such as electricity seems promising, significant innovations are still needed in the design and construction of wind turbines and PV arrays with BS systems. The main focus of this research project is the mathematical modelling and control of wind turbines, solar photovoltaic (PV) arrays, and battery storage (BS) systems. After a careful literature review of renewable energy systems, new developments and existing modelling and control methods are analysed. Wind turbine (WT) generator speed control, turbine blade pitch-angle control (pitching), and harnessing maximum power from wind turbines are investigated and presented in detail, as is the mathematical modelling of PV arrays and how to extract maximum power from PV systems. The application of model predictive control (MPC) to regulate the output power of the wind turbine, and generator speed control under variable wind speeds, are proposed by formulating a linear model from a nonlinear mathematical model of a WT. Battery chemistry and the nonlinear behaviour of battery parameters are analysed to present a new equivalent electrical circuit model. Converting the captured solar energy into useful forms, and storing it for future use when the Sun itself is obscured, is implemented using battery storage systems in a new simulation model. The temperature effect on battery cells and dynamic battery-pack modelling are described together with an accurate state-of-charge estimation method. A concise description of power converters is also given, with special reference to state-space models. A bi-directional AC/DC converter, which can work in either rectifier or inverter mode, is described with a cost-effective proportional-integral-derivative (PID/state-feedback) controller.
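As a sketch of the kind of state-of-charge bookkeeping such a battery model requires (plain coulomb counting; all parameters are illustrative assumptions, not the thesis's model):

```python
def update_soc(soc, current_a, dt_s, capacity_ah, eta=0.98):
    """Coulomb-counting state-of-charge update: integrate current over
    time, scaled by coulombic efficiency, relative to rated capacity."""
    delta = eta * current_a * dt_s / (capacity_ah * 3600.0)
    return min(max(soc + delta, 0.0), 1.0)

soc = 0.50                      # hypothetical initial state of charge
for _ in range(600):            # 10 minutes of 5 A charging, 1 s steps
    soc = update_soc(soc, current_a=5.0, dt_s=1.0, capacity_ah=20.0)
print(round(soc, 3))            # ~ 0.541
```

In practice, pure coulomb counting drifts, which is why the thesis pairs an equivalent-circuit model with a more accurate state-of-charge estimation method and temperature effects.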
17

AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM

Zhou, Yu 01 March 2017 (has links)
One of the major difficulties in web application design is the tedium of constructing new web pages from scratch. In traditional web application projects, designers usually design and implement the project step by step, in detail. My project is called "automatic generation of web applications and management system." This web application generator can generate generic and customized web applications based on software engineering theories. A flow-driven methodology, using Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML pages, functionality, a financial analysis model, customers, and BPMN. The BPMN component is the most important part of the project, because most of the work and data flow depends on the BPMN flow engine. There are two ways to use the system: one is to go to the main page, choose a web app template, and click the generate button; the other is for customers to request special orders, for which the project recommends suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
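A minimal sketch of the flow-driven idea (a generic sequential flow engine with hypothetical task names; not the project's actual BPMN engine, which handles full BPMN diagrams):

```python
def run_flow(nodes, context):
    """Execute a linear BPMN-like flow: each node is a task function
    that reads and updates the shared context."""
    for name, task in nodes:
        print(f"executing task: {name}")
        task(context)
    return context

# Hypothetical generation flow: pick template, bind schema, emit pages.
flow = [
    ("choose_template", lambda ctx: ctx.update(template="storefront")),
    ("bind_database",   lambda ctx: ctx.update(tables=["users", "orders"])),
    ("generate_pages",  lambda ctx: ctx.update(
        pages=[f"{t}.html" for t in ctx["tables"]])),
]
print(run_flow(flow, {}))
```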
18

Using System Structure and Semantics for Validating and Optimizing Performance of Multi-tier Storage Systems

Soundararajan, Gokul 01 September 2010 (has links)
Modern persistent storage systems must balance two competing imperatives: they must meet strict application-level performance goals, and they must reduce operating costs. The current techniques, manual tuning by administrators and over-provisioning of resources, are respectively time-consuming and expensive. Therefore, to reduce the costs of management, automated performance-tuning solutions are needed. To address this need, we develop and evaluate algorithms centered around the key thesis that a holistic, semantic-aware view of the application and system is needed for automatically tuning and validating the performance of multi-tier storage systems. We obtain this global system view by leveraging structural and semantic information available at each tier and by making this information available to all tiers. Specifically, we develop two key building blocks: (i) context-awareness, where information about the application structure and semantics is exchanged between the tiers, and (ii) dynamic performance models that use the structure of the system to quickly build lightweight resource-to-performance mappings. We implement a prototype storage system, called Akash, based on commodity components. This prototype enables us to study all of the above scenarios in a realistic rendering of a modern multi-tier storage system. We also develop a runtime tool, Dena, to analyze the performance and behaviour of multi-tier server systems. We apply these tools and techniques in three real-world scenarios. First, we leverage application context-awareness at the storage server to improve the performance of I/O prefetching: tracking application access patterns per context improves prediction accuracy for future access patterns over existing algorithms, where the high interleaving of I/O accesses from different contexts makes access patterns hard to recognize. Second, we build and leverage dynamic performance models for resource allocation, providing consistent and predictable performance corresponding to pre-determined application goals; we show that our dynamic resource allocation algorithms minimize the interference effects between e-commerce applications sharing a common infrastructure. Third, we introduce a high-level paradigm for interactive validation of system performance by the system administrator, who leverages existing performance models and other semantic knowledge about the system to discover bottlenecks and other opportunities for performance improvement. Our evaluation shows that our techniques enable significant improvements in performance over current approaches.
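As a sketch of a lightweight resource-to-performance mapping (the sampled points and linear interpolation are illustrative assumptions; the dissertation builds its models dynamically from system structure):

```python
import bisect

def build_model(samples):
    """Piecewise-linear resource-to-performance model from sampled
    (cache_mb, latency_ms) points, sorted by resource allocation."""
    return sorted(samples)

def predict(model, cache_mb):
    """Interpolate expected latency for an allocation between samples."""
    xs = [x for x, _ in model]
    i = bisect.bisect_left(xs, cache_mb)
    if i == 0:
        return model[0][1]
    if i == len(model):
        return model[-1][1]
    (x0, y0), (x1, y1) = model[i - 1], model[i]
    return y0 + (y1 - y0) * (cache_mb - x0) / (x1 - x0)

# Hypothetical measurements for one application tier.
model = build_model([(64, 9.0), (128, 6.5), (256, 5.2), (512, 4.9)])
print(predict(model, 192))   # ~ 5.85 ms
```

An allocator can search such per-application models for the cheapest allocation that still meets each application's latency goal, which is the essence of model-driven resource allocation.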
20

Flow batteries: Status and potential

Dumancic, Dominik January 2011 (has links)
New ideas and solutions are necessary to face the challenges in the electricity industry. The application of electricity storage systems (ESS) can improve the quality and stability of the existing electricity network. ESS can be used for peak shaving (instead of installing new generation or transmission units), renewable-energy time-shifting, and many other services. A few ESS technologies exist today: mechanical, electrical, and electrochemical storage systems. Flow batteries are electrochemical storage systems that use an electrolyte stored in tanks separate from the battery cell. Electrochemistry is essential to understanding how a flow battery functions and how it stores electric energy: the functioning of a flow battery is based on reduction and oxidation reactions in the cell. The Nernst equation is used to estimate the voltage of a cell; it tells how the half-cell potential changes with the concentration of a substance involved in an oxidation or reduction reaction. The first flow battery was invented in the 1880s but was forgotten for a long time; development was revived in the 1950s and 1970s. A flow battery consists of two parallel electrodes separated by an ion-exchange membrane, forming two half-cells. The electro-active materials are stored externally in an electrolyte and are introduced into the device only during operation. The vanadium redox battery (VRB) is based on the four possible oxidation states of vanadium and has a standard potential of 1.23 V. Full ionic equations of the VRB include protons, sulfuric acid, and the corresponding salts. The capital cost of a VRB is approximately 426 $/kW and 100 $/kWh. Other flow batteries are polysulfide-bromine, zinc-bromine, vanadium-bromine, iron-chromium, zinc-cerium, uranium, neptunium, and soluble lead-acid redox flow batteries. Flow batteries have long cycle life and quick response times, but are complicated in comparison with other batteries.
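For reference, the standard form of the Nernst equation for a half-cell (conventional symbols; a general statement, not quoted from the thesis):

```latex
E = E^{0} - \frac{RT}{zF} \ln \frac{a_{\mathrm{red}}}{a_{\mathrm{ox}}}
```

Here E^0 is the standard half-cell potential, R the gas constant, T the absolute temperature, z the number of electrons transferred, F the Faraday constant, and a_red, a_ox the activities (often approximated by concentrations) of the reduced and oxidized species. Summing the two half-cell potentials gives the cell voltage, such as the 1.23 V standard potential cited for the VRB.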
