  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Systems reliability using the flow graph

Farrier, Kenneth Edward 01 January 1970 (has links)
The problem of calculating the reliability of a complex system of interacting elements is restricted to a linear system in which no element's reliability distribution depends on any other element of the system, and in which only one path is taken through the system at a time. A precise definition is then developed to specify the reliability of such a linear, single-path-at-a-time system. A concise generating function is found that produces the reliability of the system directly from its reliability flow graph.
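The generating function itself is not reproduced in the abstract. As a hedged sketch of the underlying idea only (the function name, graph encoding, and branch-probability model below are illustrative assumptions, not Farrier's formulation), the reliability of a linear single-path-at-a-time system can be computed by summing, over every path through the flow graph, the probability of taking that path times the product of element reliabilities along it:

```python
def path_reliability(graph, start, end, rel, prob):
    """Sum over all start->end paths of (path selection probability x
    product of element reliabilities along the path)."""
    def walk(node, p, r):
        if node == end:
            return p * r
        total = 0.0
        for nxt in graph.get(node, []):
            total += walk(nxt, p * prob[(node, nxt)], r * rel[(node, nxt)])
        return total
    return walk(start, 1.0, 1.0)

# Two alternative branches taken with probability 0.5 each,
# element reliabilities 0.9 and 0.8:
graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
prob = {("s", "a"): 0.5, ("s", "b"): 0.5, ("a", "t"): 1.0, ("b", "t"): 1.0}
rel  = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "t"): 1.0, ("b", "t"): 1.0}
print(path_reliability(graph, "s", "t", rel, prob))  # ~0.85 (0.5*0.9 + 0.5*0.8)
```

With two alternative branches of reliability 0.9 and 0.8 taken with equal probability, the system reliability is their probability-weighted average.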
122

Scalable event tracking on high-end parallel systems

Mohror, Kathryn Marie 01 January 2010 (has links)
Accurate performance analysis of high-end systems requires event-based traces to correctly identify the root cause of a number of the complex performance problems that arise on these highly parallel systems. These high-end architectures contain tens to hundreds of thousands of processors, pushing application scalability challenges to new heights. Unfortunately, the collection of event-based data presents scalability challenges itself: the large volume of collected data increases tool overhead and results in data files that are difficult to store and analyze. Our solution to these problems is a new measurement technique called trace profiling that collects the information needed to diagnose performance problems that traditionally require traces, but at a greatly reduced data volume. The trace profiling technique reduces the amount of data measured and stored by capitalizing on the repeated behavior of programs, and on the similarity of the behavior and performance of parallel processes in an application run. Trace profiling is a hybrid between profiling and tracing, collecting summary information about the event patterns in an application run. Because the data has already been classified into behavior categories, we can present reduced, partially analyzed performance data to the user, highlighting the performance behaviors that accounted for most of the execution time.
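As a toy illustration of the two redundancies trace profiling exploits, repeated behavior within a process and similar behavior across processes (all names below are invented for illustration; this is not Mohror's actual implementation), one can run-length-encode each process's event stream and then bucket processes whose compressed streams match:

```python
from collections import Counter

def run_length(events):
    """Collapse consecutive repeats: the intra-process redundancy
    a trace profile exploits."""
    out = []
    for e in events:
        if out and out[-1][0] == e:
            out[-1] = (e, out[-1][1] + 1)
        else:
            out.append((e, 1))
    return out

def trace_profile(per_process_traces):
    """Group processes whose compressed traces match: inter-process
    similarity reduces the stored volume further."""
    return Counter(tuple(run_length(t)) for t in per_process_traces)

traces = [["send", "recv", "recv", "recv", "compute"]] * 3 + \
         [["send", "recv", "compute"]]
profile = trace_profile(traces)
print(len(profile))  # 2 behavior classes summarize 4 processes
```

Three identical processes collapse into one behavior category with a count, which is the kind of summary a trace profile stores in place of full per-event traces.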
123

A Data-Descriptive Feedback Framework for Data Stream Management Systems

Fernández Moctezuma, Rafael J. 01 January 2012 (has links)
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams provide processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to the substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular, state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. The research also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
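A minimal sketch of the feedback-punctuation idea (the tuple layout, predicate encoding, and "discard" action below are illustrative assumptions, not the NiagaraST implementation): a feedback punctuation pairs a description of a substream with an action applicable to it, letting an operator adapt its processing of matching tuples at run time:

```python
def apply_feedback(stream, punctuation):
    """A feedback punctuation as (description, action): the description
    is a predicate identifying a substream, and 'discard' lets an
    upstream operator shed tuples downstream no longer needs."""
    desc, action = punctuation
    if action == "discard":
        return [t for t in stream if not desc(t)]
    return list(stream)

# Downstream signals that tuples older than t=100 no longer affect results:
stream = [{"t": 90, "v": 1}, {"t": 120, "v": 2}, {"t": 95, "v": 3}]
survivors = apply_feedback(stream, (lambda t: t["t"] < 100, "discard"))
print([s["v"] for s in survivors])  # [2]
```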
124

Entropy reduction of English text using variable length grouping

Ast, Vincent Norman 01 July 1972 (has links)
It is known that the entropy of English text can be reduced by arranging the text into groups of two or more letters each; the higher the order of the grouping, the greater the entropy reduction. Using this principle in a computer text-compression system brings about difficulties, however, because the number of entries required in the translation table increases exponentially with group size. This experiment examined the possibility of using a translation table containing only selected entries of all group sizes, with the expectation of obtaining a substantial entropy reduction with a relatively small table. An expression was derived showing that the groups which should be included in the table are not necessarily those that occur frequently, but rather those that occur more frequently than would be expected from random occurrence. This is complicated by the fact that any grouping affects the frequency of occurrence of many other related groups. An algorithm was developed in which the table starts with the regular 26 letters of the alphabet and the space; entries, consisting of letter groups, complete words, and word groups, are then added one by one based on the selection criterion, and after each entry is added, adjustments are made to account for the interaction of the groups. This algorithm was programmed on a computer and run on a text sample of about 7000 words. The results showed that the entropy could easily be reduced to 3 bits per letter with a table of fewer than 200 entries; with about 500 entries, the entropy could be reduced to about 2.5 bits per letter. About 60% of the table was composed of letter groups, 42% of single words, and 8% of word groups, which indicated that the extra complications involved in handling word groups may not be worthwhile. A visual examination of the table showed that many entries were very much oriented to the particular sample; this may or may not be desirable, depending on the intended use of the translating system.
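A hedged sketch of the entropy measurement involved (the sample text and token handling are invented for illustration; this is not Ast's algorithm, which also adjusts for group interactions): per-letter entropy is the entropy of the token distribution divided by the mean token length in letters, so replacing frequent words or letter runs with single table entries lowers the bits-per-letter figure:

```python
import math
from collections import Counter

def entropy_per_letter(tokens, letters_per_token):
    """H(token distribution) / mean letters per token = bits per letter."""
    counts = Counter(tokens)
    n = sum(counts.values())
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    avg_len = sum(letters_per_token[t] * c for t, c in counts.items()) / n
    return h / avg_len

text = "the cat and the dog and the cat " * 50
# Baseline table: single characters only.
chars = list(text)
base = entropy_per_letter(chars, {c: 1 for c in set(chars)})
# Grouped table: whole words (with trailing space) as single entries.
words = text.split(" ")[:-1]
grouped = entropy_per_letter([w + " " for w in words],
                             {w + " ": len(w) + 1 for w in set(words)})
assert grouped < base  # grouping reduces bits per letter
print(round(base, 2), round(grouped, 2))
```

On this highly repetitive toy sample the reduction is dramatic; real English text gives the more modest figures the abstract reports.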
125

SADDAS; a self-contained analog to digital data acquisition system.

Petersen, Walter Anton 01 January 1972 (has links)
SADDAS, a Self-contained Analog to Digital Data Acquisition System, converts analog voltage inputs to formatted BCD (binary coded decimal) digital magnetic tape. SADDAS consists of a 16-channel multiplexer, a 17-bit (4 digits + sign) 40-microsecond analog to digital converter, a 512-byte 8-bit core memory, a 30 IPS (inches per second) digital tape recorder writing at a density of 556 cpi (characters per inch), and a controller which integrates these instruments into a flexible and easy-to-use system. Sampling rates in excess of 360 samples per second may be used when converting seven channels of data, such as IRIG (Inter-Range Instrumentation Group) analog magnetic tapes.
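Reading the quoted 360 samples per second as a per-channel rate (an assumption; the abstract is ambiguous), a back-of-envelope check shows the 40-microsecond converter is busy only about a tenth of real time at that rate, leaving headroom for memory and tape transfers:

```python
# Back-of-envelope check on the quoted figures (assumption: the 40 us
# conversion time is the only per-sample cost, and 360 samples/s is
# per channel across 7 channels).
channels = 7
rate_per_channel = 360          # samples per second
adc_time = 40e-6                # seconds per conversion
busy_fraction = channels * rate_per_channel * adc_time
print(f"{busy_fraction:.0%}")   # ADC busy about 10% of real time
```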
126

A Method and Tool for Finding Concurrency Bugs Involving Multiple Variables with Application to Modern Distributed Systems

Sun, Zhuo 05 November 2018 (has links)
Concurrency bugs are extremely hard to detect due to the huge interleaving space. They occur more often in the real world because of the prevalence of multi-threaded programs taking advantage of multi-core hardware, and of microservice-based distributed systems moving more and more applications to the cloud. Atomicity violations, the most common non-deadlock concurrency bugs, are studied in many recent works; however, those methods are applicable only to single-variable atomicity violations, and do not consider the specific challenge in distributed systems that have both pessimistic and optimistic concurrency control. This dissertation presents a tool using model checking to predict atomicity-violation concurrency bugs involving two shared variables or shared resources. We developed a unique method for inferring correlation between shared variables in multi-threaded programs and shared resources in microservice-based distributed systems; it is based on dynamic analysis and is able to detect correlations that would be missed by static analysis. For multi-threaded programs, we use a binary instrumentation tool to capture runtime information about shared variables and synchronization events; for microservice-based distributed systems, we use a web proxy to capture HTTP-based traffic about API calls and the shared resources they access, including distributed locks. Based on the detected correlation and runtime trace, the tool can explore a vast interleaving space of a multi-threaded program or a microservice-based distributed system given a small set of captured test runs. It is applicable to large real-world systems, and has predicted atomicity violations missed by other related works for multi-threaded programs as well as previously unknown atomicity violations in real-world open-source microservice-based systems. A limitation is that redundant model checking may be performed if two recorded interleaved traces yield the same partial-order model.
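A minimal sketch of the correlation-inference step (the trace format, threshold, and names below are illustrative assumptions, not the dissertation's method): variables that repeatedly appear in the same atomic region across recorded runs are treated as correlated and checked jointly for two-variable atomicity violations:

```python
from collections import defaultdict
from itertools import combinations

def infer_correlations(traces, min_support=2):
    """Count how often two shared variables are accessed inside the same
    atomic region across recorded runs; pairs seen together at least
    min_support times are treated as correlated."""
    pair_counts = defaultdict(int)
    for run in traces:
        for region in run:  # region = variables touched atomically together
            for a, b in combinations(sorted(set(region)), 2):
                pair_counts[(a, b)] += 1
    return {p for p, c in pair_counts.items() if c >= min_support}

runs = [
    [["balance", "log"], ["balance", "limit"], ["cursor"]],
    [["balance", "limit"], ["log"]],
]
print(infer_correlations(runs))  # {('balance', 'limit')}
```

Because the counts come from observed executions, a pair like ("balance", "limit") is discovered even when no static data-flow relation connects the two variables.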
127

Accelerated Iterative Algorithms with Asynchronous Accumulative Updates on a Heterogeneous Cluster

Gubbi Virupaksha, Sandesh 23 March 2016 (has links)
In recent years, with the exponential growth in web-based applications, the amount of data generated has increased tremendously. Quick and accurate analysis of this 'big data' is indispensable for making better business decisions and reducing operational cost. The challenges faced by modern data centers in processing big data are manifold: keeping up the pace of processing with increased data volume and velocity, dealing with system scalability, and reducing energy costs. Today's data centers employ a variety of distributed computing frameworks running on clusters of commodity hardware, including general-purpose processors, to process big data. Though better performance in terms of big-data processing speed has been achieved with existing distributed computing frameworks, there is still an opportunity to increase processing speed further. FPGAs, which are designed for computationally intensive tasks, are promising processing elements that can increase processing speed. In this thesis, we discuss how FPGAs can be integrated into a cluster of general-purpose processors running iterative algorithms to obtain high performance. We designed a heterogeneous cluster comprised of FPGAs and CPUs and ran benchmarks such as PageRank, Katz, and Connected Components to measure the performance of the cluster. Performance improvement in terms of execution time was evaluated against a homogeneous cluster of general-purpose processors and a homogeneous cluster of FPGAs. We built multiple four-node heterogeneous clusters with different configurations by varying the number of CPUs and FPGAs, and studied the effects of load balancing between CPUs and FPGAs. We obtained speedups of 20X, 11.5X, and 2X for the PageRank, Katz, and Connected Components benchmarks on a 2 CPU + 2 FPGA cluster configuration with an unbalanced load ratio, against a 4-node homogeneous CPU cluster. We also studied the effect of input graph partitioning, and showed that when the input is a Multilevel-KL partitioned graph we obtain improvements of 11%, 26%, and 9% over a randomly partitioned graph for the Katz, PageRank, and Connected Components benchmarks on a 2 CPU + 2 FPGA cluster.
128

A Real Time Web Based Electronic Triage, Resource Allocation and Hospital Dispatch System for Emergency Response

Inampudi, Venkata Srihari 01 January 2011 (has links) (PDF)
Disasters are characterized by large numbers of victims and required resources, overwhelming the available resources. Disaster response involves various entities such as Incident Commanders, dispatch centers, emergency operations centers, area command, and hospitals. An effective emergency response system should facilitate coordination between these entities. Victim triage, emergency resource allocation, and victim dispatch to hospitals form an important part of an emergency response system. In the present research effort, an emergency response system with the aforementioned components is developed. Triage is the process of prioritizing mass-casualty victims based on severity of injuries. The system presented in this thesis is a low-cost victim triage system with RFID tags that aggregates all victim information within a database. It also allows first responders' movements to be tracked using GPS. A web-based real-time resource allocation tool that can assist Incident Commanders in resource allocation and transportation for multiple simultaneous incidents has been developed. This tool ensures that high-priority resources at emergency sites are received in the least possible time. It also computes the patient dispatch schedule from each disaster site to each hospital: patients are allocated to the nearest hospitals with available medical facilities. The tool can further assist resource managers in emergency resource planning by computing the time taken to receive required resources from the nearest depots using Google Maps. These web-based tools complement emergency response systems by providing decision-making capabilities.
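A hedged sketch of the dispatch idea only (the one-dimensional distance model, capacity encoding, and greedy policy below are illustrative assumptions, not the thesis's algorithm): victims are taken in triage-priority order, and each is sent to the nearest hospital that still has a free bed:

```python
def dispatch(patients, hospitals):
    """Greedy sketch: assign each victim, highest triage priority first
    (lower number = more urgent), to the nearest hospital with a free bed."""
    assignments = {}
    for pid, priority, site_km in sorted(patients, key=lambda p: p[1]):
        best = min(
            (h for h in hospitals if h["beds"] > 0),
            key=lambda h: abs(h["km"] - site_km),
            default=None,
        )
        if best is not None:
            best["beds"] -= 1
            assignments[pid] = best["name"]
    return assignments

hospitals = [{"name": "General", "km": 2, "beds": 1},
             {"name": "Mercy", "km": 5, "beds": 2}]
patients = [("p1", 1, 0), ("p2", 2, 0), ("p3", 3, 0)]
out = dispatch(patients, hospitals)
print(out)  # p1 gets General (nearest); p2 and p3 overflow to Mercy
```

Once the nearest hospital fills, later (lower-priority) victims overflow to the next nearest facility with capacity, which is the behavior the dispatch schedule in the thesis needs to capture.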
129

A Real Time Indoor Navigation and Monitoring System for Firefighters and Visually Impaired

Gandhi, Siddhesh R 01 January 2011 (has links) (PDF)
M.S. E.C.E., University of Massachusetts Amherst, May 2011. Directed by: Professor Aura Ganz. There has been widespread growth of technology in almost every facet of day-to-day life, but there are still important application areas in which technology advancements have not been implemented in a cost-effective and user-friendly manner. The applications we address in this proposal include: 1) indoor localization and navigation of firefighters during rescue operations, and 2) indoor localization and navigation for the blind and visually impaired population. Firefighting is a dangerous job, as there can be several unexpected hazards while rescuing victims. Since firefighters do not have any knowledge of the internal structure of a fire-ridden building, they may not be able to find the location of the EXIT door, a fact that can prove fatal. We introduce an indoor location tracking and navigation system (FIREGUIDE) using RFID technology integrated with augmented reality. FIREGUIDE assists firefighters in finding the nearest exit by providing navigation instructions to the exits as well as an augmented-reality view of the location and direction of the exits. The system also presents the Incident Commander with the current firefighter's location superimposed on a map of the building floor. We envision that the FIREGUIDE system will save a significant number of firefighters' and victims' lives. Blind or visually impaired people find it difficult to navigate independently in both outdoor and indoor environments. The outdoor navigation problem can be solved by using systems with GPS support, but indoor navigation systems for the blind or visually impaired are still a challenge to conquer, given the requirements of low cost and user-friendly operation.
In order to enhance the perception of indoor and unfamiliar environments for the blind and visually impaired, as well as to aid in their navigation through such environments, we propose a novel approach that provides context-aware navigation services. INSIGHT uses RFID (Radio Frequency Identification) and tagged spaces (audio landmarks), enabling a ubiquitous computing system with contextual awareness of its users while providing them persistent and context-aware information. The INSIGHT system supports a number of unique features: a) low deployment and maintenance cost; b) scalability, i.e., the system can be deployed in very large buildings; c) on-demand operation that does not overwhelm the user, offering small amounts of information on demand; and d) portability and ease of use, i.e., the custom handheld device carried by the user is compact and instructions are received audibly.
130

Categorization of Security Design Patterns

Dangler, Jeremiah Y 01 May 2013 (has links) (PDF)
Strategies for software development often slight security-related considerations, due to the difficulty of developing realizable requirements, identifying and applying appropriate techniques, and teaching secure design. This work describes a three-part strategy for addressing these concerns. Part 1 provides detailed questions, derived from a two-level characterization of system security based on work by Chung et al., to elicit precise requirements. Part 2 uses a novel framework for relating this characterization to previously published strategies, or patterns, for secure software development. Included case studies suggest the framework's effectiveness, involving the application of three patterns for secure design (Limited View, Role-Based Access Control, Secure State Machine) to a production system for document management. Part 3 presents teaching modules to introduce patterns into lower-division computer science courses. Five modules (integer overflow, input validation, HTTPS, file access, and SQL injection) are proposed for conveying an awareness of security patterns and their value in software development.
