61 |
The RMT (Recursive multi-threaded) tool: A computer-aided software engineering tool for monitoring and predicting software development progress
Lin, Chungping 01 January 1998 (has links)
No description available.
|
62 |
Design and implementation of car rental system
Abdel-Jaber, Fadi Fayez 01 January 2001 (has links)
When someone wants to rent a car, the customer usually thinks twice about the company from which to rent. The decision is based on factors such as good rates, quality, and customer service. The service the company representative offers the client should be fast, clear, and accurate. This goal cannot be achieved without an information system that enables the customer representative to answer the various questions a client might have.
|
63 |
Eliminating Data Redundancy: Our Solution for Database Discovery using Alma/Primo
Kindle, Jacob; Clamon, Travis 05 May 2016 (has links)
East Tennessee State University recently adopted Alma & Primo and was surprised by the lack of an A-Z database discovery module. Frustrated by having to maintain electronic resources separately on our library website and in Alma, we embarked on a goal to eliminate this redundancy and use Alma/Primo exclusively. This presentation will cover our entire workflow in both Alma & Primo and the issues we encountered along the way. We'll first go over our process in Alma, including MARC record creation, electronic collection setup, and the top-level collection module. Next, we'll cover our workflow in Primo, including normalization rules, scoping, PNX display, facets, and code table changes. The last section will cover the Primo X-Services API and how it was used to build our A-Z database list.
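For context, a minimal sketch of how an A-Z list might be pulled from Primo's legacy X-Services brief-search interface; the hostname, port, institution code, query, and namespace details below are assumptions for illustration, not ETSU's actual configuration:

```python
# Hypothetical sketch: query Primo X-Services brief search and list titles
# from the returned PNX records. Host, port, institution, and query values
# are placeholders; consult your Primo back office for the real ones.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://primo.example.edu:1701/PrimoWebServices/xservice/search/brief"
PNX_NS = "http://www.exlibrisgroup.com/xsd/primo/primo_nm_bib"  # assumed

params = {
    "institution": "ETSU",               # assumed institution code
    "query": "any,contains,database",    # assumed query for database records
    "indx": "1",
    "bulkSize": "50",
}

with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    tree = ET.parse(resp)

# Walk every PNX <record> in the response and print its display title.
for record in tree.iter(f"{{{PNX_NS}}}record"):
    title = record.findtext(".//p:display/p:title", namespaces={"p": PNX_NS})
    if title:
        print(title)
```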
|
64 |
DecaFS: A Modular Distributed File System to Facilitate Distributed Systems Education
Meth, Halli Elaine 01 June 2014 (links)
Data quantity, speed requirements, reliability constraints, and other factors encourage industry developers to build distributed systems and use distributed services. Software engineers are therefore exposed to distributed systems and services daily in the workplace. However, distributed computing is hard to teach in Computer Science courses due to the complexity distribution brings to all problem spaces. This presents a gap in education where students may not fully understand the challenges introduced with distributed systems. Teaching students distributed concepts would help better prepare them for industry development work.
DecaFS, Distributed Educational Component Adaptable File System, is a modular distributed file system designed for educational use. The goal of the system is to teach distributed computing concepts to undergraduate and graduate level students by allowing them to develop small, digestible portions of the system. The system is broken up into layers, and each layer is broken up into modules so that students can build or modify different components in small, assignment-sized portions. Students can replace modules or entire layers by following the DecaFS APIs and recompiling the system. This allows the behavior of the DFS (Distributed File System) to change based on student implementation, while providing base functionality for students to work from.
Our implementation includes a code base of core DecaFS Modules that students can work from and basic implementations of non-core DecaFS Modules. Our basic non-core modules can be modified to implement more complex distribution techniques without modifying core modules. We have shown the feasibility of developing a modular DFS, while adhering to requirements such as configurable sizes (file, stripe, chunk) and support of multiple data replication strategies.
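As a rough illustration of the kind of module boundary such a design implies (the names and signatures below are invented for this sketch, not the actual DecaFS APIs), a replication strategy might be swapped out behind a small interface:

```python
# Illustrative sketch of a pluggable replication-strategy module for a
# teaching DFS. Class and method names are hypothetical, not DecaFS's APIs.
from abc import ABC, abstractmethod


class ReplicationStrategy(ABC):
    """Decides which storage nodes receive each stripe of a file."""

    @abstractmethod
    def place(self, stripe_id: int, nodes: list[int]) -> list[int]:
        """Return the node ids that should hold copies of this stripe."""


class MirroredPlacement(ReplicationStrategy):
    """Two-way mirroring: each stripe lands on two distinct nodes."""

    def place(self, stripe_id: int, nodes: list[int]) -> list[int]:
        primary = nodes[stripe_id % len(nodes)]
        backup = nodes[(stripe_id + 1) % len(nodes)]
        return [primary, backup]


# A student assignment could replace MirroredPlacement with, say, a
# parity-based scheme without touching the layers above or below it.
strategy: ReplicationStrategy = MirroredPlacement()
print(strategy.place(stripe_id=7, nodes=[0, 1, 2, 3]))  # -> [3, 0]
```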
|
65 |
PolyFS Visualizer
Fallon, Paul Martin 01 June 2016 (links)
File systems, one of the most important operating systems topics, control how we store and access data and form a key part of a computer scientist's understanding of the underlying mechanisms of a computer. However, file systems, with their abstract concepts and lack of concrete learning aids, are a confusing subject for students. Historically at Cal Poly, CPE 453, Introduction to Operating Systems, has been one of the most-failed classes in the computing majors, pointing to the need for better teaching and learning tools. Tools that give students concrete examples of abstract concepts could better prepare them for industry.
The PolyFS Visualizer is a block-level file system visualization service built for the PolyFS and TinyFS file system design specifications currently used by some of the professors teaching CPE 453. The service allows students to easily view the blocks of their file system and see metadata, each block's binary content, and the interlinked structure. Students can either compile their file system code with a provided block emulation library to build their disk on a remote server and use the visualization website, or load the file backing their file system directly into the visualization service to view it locally. This lets students easily view, debug, and explore their implementation of a file system and understand how different design decisions affect its operation.
The implementation includes three main components: a disk emulation library in C for compilation with students' code, a Node.js back-end to handle students' file systems and block operations, and a read-only visualization service. We conducted two surveys of students to determine the usefulness of the PolyFS Visualizer. Students responded that the PolyFS Visualizer helps with the PolyFS file system design project, and they offered several ideas for future features and expansions.
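To make the block-level idea concrete, here is a minimal sketch of the kind of decoding such a visualizer performs; the 256-byte block size and superblock layout below are invented for illustration and do not match the actual TinyFS or PolyFS specifications:

```python
# Hypothetical sketch: decode a superblock from a toy disk image.
# The block size and field layout are assumptions for illustration only.
import struct

BLOCK_SIZE = 256  # assumed block size

def read_block(image_path: str, block_num: int) -> bytes:
    """Fetch one raw block from the disk image, as an emulation layer would."""
    with open(image_path, "rb") as disk:
        disk.seek(block_num * BLOCK_SIZE)
        return disk.read(BLOCK_SIZE)

def parse_superblock(block: bytes) -> dict:
    """Unpack an assumed layout: magic number, root-inode block, free-list head."""
    magic, root_inode, free_list = struct.unpack_from("<IHH", block, 0)
    return {"magic": hex(magic), "root_inode": root_inode, "free_list": free_list}

# A visualizer walks structures like these to render blocks and their links.
# print(parse_superblock(read_block("disk.img", 0)))
```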
|
66 |
Quantifying Parkinson's Disease Symptoms Using Mobile Devices
Aylward, Charles R 01 December 2016 (links)
Current assessments for evaluating the progression of Parkinson’s Disease are largely qualitative and based on small sets of data obtained from occasional doctor-patient interactions. There is a clinical need to improve the techniques used for mitigating common Parkinson’s Disease symptoms. Available data sets for researching the disease are minimal, hindering advancement toward understanding the underlying causes and effectiveness of treatment and therapies. Mobile devices present an opportunity to continuously monitor Parkinson’s Disease patients and collect important information regarding the severity of symptoms. The evolution of digital technology has opened doors for clinical research to extend beyond the clinic by incorporating complex sensors in commonly used devices. Leveraging these sensors to quantify characteristic Parkinson’s Disease symptoms may drastically improve patient care and the reliability of symptom assessment.
The goal of this project is to design and develop a system for measuring and analyzing the cardinal symptoms of Parkinson's using mobile devices. An application for the iPhone and Apple Watch is developed, utilizing the sensors on the devices to collect data during the performance of motor tasks. Assessments for tremor, bradykinesia, and postural instability are implemented to mimic UPDRS evaluations normally performed by a neurologist. The application connects to a cloud-based server to transfer the collected data for remote access and analysis. Example MATLAB analysis demonstrates potential approaches for extracting meaningful data to be used for monitoring the progression of Parkinson's Disease and the effectiveness of treatment and therapies. High-level verification testing is performed to show general efficacy of the assessment tasks. The system design successfully lays the groundwork for a mobile device-based assessment tool to objectively measure Parkinson's Disease symptoms.
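As an illustration of the kind of analysis such sensor data supports (not the project's actual MATLAB code), Parkinsonian resting tremor typically falls around 4-6 Hz, so estimating the dominant frequency of wrist accelerometer samples is a natural first step; the sampling rate here is an assumption:

```python
# Illustrative sketch: estimate the dominant tremor frequency from a window
# of accelerometer magnitudes sampled by a wearable. The sampling rate and
# the 4-6 Hz band of interest are assumptions typical of tremor analysis.
import numpy as np

FS = 50.0  # assumed sampling rate in Hz

def dominant_frequency(accel_magnitude: np.ndarray) -> float:
    """Return the peak frequency (Hz) of the detrended signal's spectrum."""
    signal = accel_magnitude - accel_magnitude.mean()  # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return float(freqs[spectrum.argmax()])

# Synthetic check: a 5 Hz oscillation buried in noise is recovered as ~5 Hz.
t = np.arange(0, 10, 1.0 / FS)
samples = 1.0 + 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(len(t))
print(f"dominant frequency: {dominant_frequency(samples):.1f} Hz")
```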
|
67 |
Towards Malleable Distributed Storage Systems: From Models to Practice
Cheriere, Nathanaël 05 November 2019 (links)
The Cloud, with its pay-as-you-go model, gives the possibility of elastic resource management; users can claim and release resources as needed. This elasticity leads to financial and energy cost reductions, and helps applications cope with varying workloads. Distributed cloud and HPC applications processing large amounts of data are often co-located with a distributed storage system in order to ensure fast data accesses. Although many works have proposed dynamically rescaling the processing part of such systems to match their workload, the storage is never considered malleable (able to be dynamically rescaled), since moving massive amounts of data around is assumed to be too slow in practice. However, in recent years hardware and storage techniques have evolved, and this assumption needs to be revisited. In this thesis, we present a study of rescaling operations in distributed storage systems, approached from different angles. We start by modeling the minimal duration of rescaling operations to estimate their potential speed. Then, we develop a benchmark to measure the viability of distributed storage system malleability on a given platform. Last, we implement a rescaling manager for distributed storage systems that decides and organizes the data transfers required during a rescaling operation.
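As a rough illustration of the kind of model the first contribution describes (a simplified back-of-the-envelope bound, not the thesis's actual model), the minimal duration of a decommission is limited by how fast leaving nodes can send their data and remaining nodes can absorb it:

```python
# Simplified sketch: a lower bound on decommission time for a distributed
# storage system, assuming data is spread evenly across nodes and the
# network is the bottleneck. Illustrative model, not the thesis's exact one.

def decommission_lower_bound(
    total_data_gb: float,   # data stored across the whole system
    n_nodes: int,           # nodes before the operation
    n_leaving: int,         # nodes being removed
    node_bw_gbps: float,    # per-node network bandwidth, GB/s
) -> float:
    """Return the minimal time (seconds) to move data off the leaving nodes."""
    data_to_move = total_data_gb * n_leaving / n_nodes  # data on leaving nodes
    send_rate = n_leaving * node_bw_gbps                # leaving nodes sending
    recv_rate = (n_nodes - n_leaving) * node_bw_gbps    # remaining nodes receiving
    return data_to_move / min(send_rate, recv_rate)

# Example: 100 TB over 20 nodes, removing 5, at 1.25 GB/s per node.
print(f"{decommission_lower_bound(100_000, 20, 5, 1.25):.0f} s minimum")  # 4000 s
```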
|
68 |
Self-Healing Cellular Automata to Correct Soft Errors in Defective Embedded Program Memories
Voddi, Varun 01 December 2009 (links)
Static Random Access Memory (SRAM) cells in ultra-low power Integrated Circuits (ICs) based on nanoscale Complementary Metal Oxide Semiconductor (CMOS) devices are likely to be the most vulnerable to large-scale soft errors. Conventional error correction circuits may not be able to handle the distributed nature of such errors and are susceptible to soft errors themselves. In this thesis, a distributed error correction circuit called Self-Healing Cellular Automata (SHCA) that can repair itself is presented. A possible way to deploy a SHCA in a system of SRAM-based embedded program memories (ePM) for one type of chip multi-processors is also discussed. The SHCA is compared with conventional error correction approaches and its strengths and limitations are analyzed.
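To give a flavor of the distributed-repair idea (this toy rule is for illustration only; it is not the SHCA design from the thesis), a cellular automaton can correct isolated bit flips in replicated storage by having each cell locally adopt the majority of its copies:

```python
# Toy sketch of cellular-automaton-style error correction: each memory bit
# is stored in three copies, and each synchronous update step rewrites every
# copy to the per-bit majority value, so an isolated soft error heals in one
# step. This illustrates local distributed repair only, not the actual SHCA.

def heal_step(copies: list[list[int]]) -> list[list[int]]:
    """One CA step: every copy of every bit adopts the majority value."""
    healed = []
    for bit_copies in copies:
        majority = 1 if sum(bit_copies) >= 2 else 0
        healed.append([majority] * 3)
    return healed

# Stored word 1011, with a soft error flipping the middle copy of bit 2.
memory = [[1, 1, 1], [0, 0, 0], [1, 0, 1], [1, 1, 1]]
memory = heal_step(memory)
print([c[0] for c in memory])  # -> [1, 0, 1, 1]: the flipped copy is repaired
```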
|
69 |
A Quantitative Analysis of Memory Controller Page Policies
Blackmore, Matthew 28 February 2013 (links)
Two common goals in computing system design are increasing performance and decreasing power consumption. DRAM-based memory subsystems are a major component of both system performance and power consumption. Memory controllers employ strategies to efficiently schedule DRAM operations to reduce latency and to utilize DRAM low-power modes when possible. One of the most important of these is the page policy, which determines when to close pages in DRAM. An effective memory controller page policy is important to minimizing power consumption and increasing system performance. This thesis explores the impact memory controller page policy has on performance, as measured by the number of page-hits minus page-misses and by estimated average memory access latency. I captured real-time DDR3 command and address memory traces for the SPEC CPU2006 benchmarks under three memory controller page policies: closed page, fixed open-page, and Intel's adaptive open-page [1]. Traces were captured using a programmable memory traffic analyzer (PMTA), a device interposed between the DIMM slot and the DDR3 DIMM on the motherboard. The memory traces for each benchmark were analyzed to determine the absolute number of page-hits and page-misses that occurred. In software post-processing I simulated a theoretically perfect "oracle" page policy for each captured trace to compare the efficiency of existing policies. Under the oracle page policy, the SPEC CPU2006 benchmarks exhibited an average increase in the number of page-hits minus page-misses of 280.3% and an average decrease in average memory latency of 11.1%. Two new adaptive open-page policies are proposed and simulated using the captured memory traces. These proposed policies result in average increases of 74.8% and 62.4% in the number of page-hits minus page-misses over Intel's adaptive open-page policy, and average decreases in average memory latency of 3.8% and 3.4%.
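As a sketch of how such an oracle can be computed offline from a captured trace (a minimal reconstruction of the idea, not the thesis's post-processing code), the oracle looks ahead one access per bank and keeps a row open only if the next access to that bank hits the same row:

```python
# Minimal sketch of an offline "oracle" page-policy evaluation over a memory
# trace. Each access is (bank, row); the oracle leaves a row open only when
# the next access to the same bank targets the same row, so it never incurs
# a page-miss that one-access look-ahead could have avoided.
# Illustrative reconstruction only, not the thesis's post-processing code.

def oracle_hits_and_misses(trace: list[tuple[int, int]]) -> tuple[int, int]:
    next_row = {}   # bank -> row of the *next* access, filled by a reverse scan
    decisions = []
    for bank, row in reversed(trace):
        decisions.append(next_row.get(bank))  # row of the following same-bank access
        next_row[bank] = row
    decisions.reverse()

    hits = misses = 0
    open_row = {}   # bank -> currently open row (absent = bank precharged)
    for (bank, row), upcoming in zip(trace, decisions):
        if open_row.get(bank) == row:
            hits += 1        # page-hit: the needed row is already open
        elif bank in open_row:
            misses += 1      # page-miss: the wrong row was left open
        # a closed bank (page-empty) counts as neither hit nor miss here
        open_row[bank] = row
        if upcoming != row:  # oracle decision: close unless the next access hits
            del open_row[bank]
    return hits, misses

trace = [(0, 5), (0, 5), (1, 2), (0, 7), (1, 2), (0, 5)]
print(oracle_hits_and_misses(trace))  # -> (2, 0): two page-hits, no page-misses
```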
|
70 |
Ranked Similarity Search of Scientific Datasets: An Information Retrieval Approach
Megler, Veronika Margaret 04 June 2014 (links)
In the past decade, the amount of scientific data collected and generated by scientists has grown dramatically. This growth has intensified an existing problem: in large archives consisting of datasets stored in many files, formats, and locations, how can scientists find data relevant to their research interests? We approach this problem in a new way: by adapting Information Retrieval techniques, developed for searching text documents, to the world of (primarily numeric) scientific data. We propose an approach that uses a blend of automated and curated methods to extract metadata from large repositories of scientific data. We then perform searches over this metadata, returning results ranked by similarity to the search criteria. We present a model of this approach and describe a specific implementation performed at an ocean-observatory data archive and now running in production. Our prototype implements scanners that extract metadata from datasets containing different kinds of environmental observations, and a search engine with a candidate similarity measure for comparing a set of search terms to the extracted metadata. We evaluate the utility of the prototype through two user studies; these studies show that the approach resonates with users and that our proposed similarity measure performs well when analyzed using standard Information Retrieval evaluation methods. We ran performance tests to explore how continued archive growth would affect our goal of interactive response, developed and applied techniques that mitigate the effects of that growth, and showed that these techniques are effective. Lastly, we describe some of the research needed to extend this initial work into a true "Google for data".
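As a sketch of the flavor of similarity measure involved (an invented example, not the dissertation's actual measure), ranking datasets by how well their extracted metadata extents overlap a query's time and space ranges could look like this:

```python
# Illustrative sketch: rank datasets by overlap between a query's ranges and
# each dataset's extracted metadata extents. The scoring function and the
# example data are invented for illustration, not the dissertation's measure.

def overlap_fraction(lo_a: float, hi_a: float, lo_b: float, hi_b: float) -> float:
    """Fraction of range A that intersects range B (0.0 when disjoint)."""
    inter = max(0.0, min(hi_a, hi_b) - max(lo_a, lo_b))
    return inter / (hi_a - lo_a) if hi_a > lo_a else 0.0

def score(query: dict, metadata: dict) -> float:
    """Average the overlap of each queried dimension against the metadata."""
    parts = [
        overlap_fraction(*query[dim], *metadata[dim])
        for dim in query if dim in metadata
    ]
    return sum(parts) / len(parts) if parts else 0.0

datasets = {
    "cruise_2009": {"time": (2009.0, 2010.0), "depth_m": (0.0, 50.0)},
    "mooring_2012": {"time": (2012.0, 2013.0), "depth_m": (0.0, 10.0)},
}
query = {"time": (2009.5, 2010.5), "depth_m": (0.0, 20.0)}

ranked = sorted(datasets, key=lambda name: score(query, datasets[name]), reverse=True)
for name in ranked:
    print(name, round(score(query, datasets[name]), 2))  # cruise_2009 ranks first
```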
|