11

Analysis of a parallelized neural network training program implemented using MPI and RPCs

Cordova, Hector. January 2008 (has links)
Thesis (M.S.)--University of Texas at El Paso, 2008. / Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
12

Methods and Tools for Practical Software Testing and Maintenance

Saieva, Anthony January 2024 (has links)
As software continues to envelop traditional industries, the need for attention to cybersecurity is higher than ever. Software security helps protect businesses and governments from financial losses due to cyberattacks and data breaches, as well as from reputational damage. In theory, securing software is relatively straightforward: it involves following established best practices and guidelines. In practice, however, it is often much more complicated. It requires a deep understanding of the underlying system and code (including potentially legacy code), as well as a comprehensive understanding of the threats and vulnerabilities that may be present, and it involves implementing strategies to protect against those threats, which may combine technologies, processes, and procedures. In fact, many real cyberattacks stem not from zero-day vulnerabilities but from known issues that have not been addressed, so real software security also requires ongoing monitoring and maintenance to ensure that critical systems remain secure.
This thesis presents a series of novel techniques that together form an enhanced software maintenance methodology, from initial bug reporting all the way through patch deployment. We begin by introducing Ad Hoc Test Generation, a novel testing technique for the case where a security vulnerability or other critical bug is not detected by the developers' test suite and is only discovered post-deployment: developers must quickly devise a new test that reproduces the buggy behavior, then check whether their candidate patch indeed fixes the bug without breaking other functionality, all while racing to deploy before attackers pounce on exposed user installations. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test-suite generation techniques such as symbolic execution. Our prototype of this concept is called ATTUNE.
To construct patches, developers maintaining software may in some instances be forced to work directly with the binary because the source code is no longer available. For these cases, this work presents a transformer-based model called DIRECT that recovers semantically meaningful names for variables and functions whose names have been lost, giving developers a facsimile of the source code that would otherwise be unavailable. When developers need even more support deciphering the decompiled code, we provide another tool called REINFOREST that lets them search for similar code, which they can use to further understand the code in question and as a reference when developing a patch.
After patches have been written, deployment remains a challenge. In some instances, deploying a patch for the buggy behavior may require supporting legacy systems where software cannot be upgraded without causing compatibility issues. To support these updates, this work introduces the concept of binary patch decomposition, which breaks a software release down into its component parts and allows software administrators to apply only the critical portions without breaking functionality.
Overall, we present a novel software patching methodology with which we can recreate bugs, develop patches, and deploy updates in the presence of the typical challenges of patching production software, including deficient test suites, lack of source code, lack of documentation, compatibility issues, and the difficulties of patching binaries directly.
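
As a rough illustration of the binary patch decomposition idea described in this abstract, the sketch below (Python, with entirely hypothetical names and data shapes, not the thesis's actual tooling) splits a release diff into per-function chunks and selects only the security-critical ones.

    # Illustrative sketch only: a toy model of binary patch decomposition.
    # All names and data shapes here are hypothetical.

    def decompose(changed_functions, fixed_functions):
        """Split a whole-release binary diff into per-function chunks.

        changed_functions: dict mapping symbol -> replacement bytes
        fixed_functions:   set of symbols touched by the security fix
        """
        return [
            {"symbol": sym, "delta": delta, "critical": sym in fixed_functions}
            for sym, delta in changed_functions.items()
        ]

    def select_critical(chunks):
        """Keep only chunks needed for the fix; feature changes stay
        unapplied, which is what spares legacy deployments from
        compatibility breakage."""
        return [c for c in chunks if c["critical"]]

    # Toy usage
    chunks = decompose(
        changed_functions={"parse_header": b"\x90", "render_ui": b"\x90"},
        fixed_functions={"parse_header"},
    )
    print([c["symbol"] for c in select_critical(chunks)])  # ['parse_header']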
13

Distributed problem solving environments for scientific computing

DeSa, Colin Joseph 04 August 2009 (has links)
The aim of this thesis is to research the issues involved in creating distributed problem solving environments for scientific computing. As part of our evaluation, we have developed a distributed problem solving environment called DPSolve, which combines a very high-level language, an interactive X Windows interface, and a set of powerful problem solving methods into a single environment. The interface is designed to work on any system running X Windows, whilst the computations are done on a more powerful parallel computer. We implemented the interface on a DEC3100 workstation running ULTRIX, which communicates via RPC with procedures running on a ten-processor Sequent 581 running DYNIX. The design decisions and implementation details of our system are discussed at length, along with a detailed example of the system at work. We critically evaluate the approach we have taken and show why it can scale to a very large class of scientific problems. We conclude that this distributed environment should be representative of future scientific problem solving environments. / Master of Science
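
The client/server split described here, a lightweight interface on a workstation driving computation on a remote machine over RPC, can be sketched with Python's standard xmlrpc modules. This is purely illustrative: the thesis system itself is not Python-based, and the host name and function names below are placeholders.

    # Sketch of a thin RPC front end offloading work to a compute server.

    # --- server side (would run on the compute machine) ---
    from xmlrpc.server import SimpleXMLRPCServer

    def solve(problem_spec):
        # Placeholder for the actual numerical solver.
        return {"status": "ok", "echo": problem_spec}

    def run_server():
        with SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True) as server:
            server.register_function(solve, "solve")
            server.serve_forever()

    # --- client side (would run on the workstation front end) ---
    import xmlrpc.client

    def submit(problem_spec, host="compute.example.org"):
        proxy = xmlrpc.client.ServerProxy(f"http://{host}:8000/", allow_none=True)
        return proxy.solve(problem_spec)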
14

Extreme scale data management in high performance computing

Lofstead, Gerald Fredrick 15 November 2010 (has links)
Extreme scale data management in high performance computing requires consideration of the end-to-end scientific workflow process. Of particular importance for runtime performance, the write-read cycle must be addressed as a complete unit. Any optimization made to enhance writing performance must consider the subsequent impact on reading performance. Only by addressing the full write-read cycle can scientific productivity be enhanced. The ADIOS middleware developed as part of this thesis provides an API nearly as simple as the standard POSIX interface, but with the flexibility to choose which transport mechanism(s) to employ at or during runtime. The accompanying BP file format is designed for high performance parallel output with limited coordination overheads, while incorporating features to accelerate subsequent use of the output for reading operations. This pair of optimizations, of the output mechanism and the output format, is made such that it either does not negatively impact or greatly improves subsequent reading performance when compared to popular self-describing file formats. This end-to-end advantage of the ADIOS architecture is further enhanced through techniques to better enable asynchronous data transports, affording the incorporation of 'in flight' data processing operations and pseudo-transport mechanisms that can trigger workflows or other operations.
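
The central design point, a single simple write API with the transport chosen at run time, can be pictured with a small sketch. This is not the actual ADIOS API or BP format, just a toy Python stand-in showing how the application's call site stays fixed while the output strategy varies.

    # Toy stand-in for a runtime-selectable output transport (not ADIOS itself).
    import json, pickle

    def _write_posix(path, variables):
        with open(path, "wb") as f:
            pickle.dump(variables, f)          # stand-in for a plain POSIX writer

    def _write_indexed(path, variables):
        # Stand-in for a self-describing format: payload plus a small footer
        # index so readers can locate variables without a full scan.
        payload = pickle.dumps(variables)
        index = {name: type(v).__name__ for name, v in variables.items()}
        with open(path, "wb") as f:
            f.write(payload)
            footer = json.dumps(index).encode()
            f.write(footer + len(footer).to_bytes(8, "little"))

    TRANSPORTS = {"posix": _write_posix, "indexed": _write_indexed}

    def write_output(path, variables, transport="indexed"):
        """One call site in the application; the transport is a runtime choice."""
        TRANSPORTS[transport](path, variables)

    write_output("checkpoint.bp", {"step": 42, "temperature": [1.0, 2.5]},
                 transport="indexed")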
15

A Sparse Learning Approach for Linux Kernel Data Race Prediction

Ryan, Gabriel January 2023 (has links)
Operating system kernels rely on fine-grained concurrency to achieve optimal performance on modern multi-core processors. However, heavy usage of fine-grained concurrency mechanisms makes modern operating system kernels prone to data races, which can cause severe and often elusive bugs. In this thesis, I propose a new approach to identifying data races in OS kernels based on learning a model to predict which memory accesses can feasibly be executed concurrently with one another. To develop an efficient learning method for memory access feasibility, I develop a novel approach based on encoding feasibility as a Boolean indicator function of system calls and ordered memory accesses. A memory access feasibility function encoded this way has a naturally sparse latent representation, owing to the sparsity of interthread communications and synchronization interactions, and can therefore be accurately approximated from a small number of observed concurrent execution traces. This thesis introduces two key contributions. First, Probabilistic Lockset Analysis (PLA) is a new analysis that exploits sparsity in input dependencies in conjunction with a conservative lockset analysis to efficiently predict data races in the Linux kernel. Second, approximate happens-before analysis in the Fourier domain (HBFourier) generalizes the approach used by PLA to reason about interthread memory communications and synchronization events through sparse Fourier learning. In addition to being theoretically grounded, these techniques are highly practical: they find hundreds of races in a recent Linux development kernel, an order of magnitude improvement over prior work, and uncover races with severe security impacts that have been overlooked by existing kernel testing systems for years.
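
For readers unfamiliar with the lockset component mentioned above, the following toy sketch shows the classic conservative lockset check: two accesses are race candidates if they come from different threads, at least one is a write, and they hold no lock in common. It illustrates the baseline idea PLA builds on, not PLA's probabilistic analysis itself, and all names are hypothetical.

    # Toy conservative lockset check over an observed access trace.
    from collections import namedtuple

    Access = namedtuple("Access", "thread addr is_write locks")

    def race_candidates(trace):
        """trace: iterable of Access records from observed executions."""
        seen = {}          # addr -> list of prior accesses
        candidates = []
        for acc in trace:
            for prev in seen.get(acc.addr, []):
                if (prev.thread != acc.thread
                        and (prev.is_write or acc.is_write)
                        and not (prev.locks & acc.locks)):
                    candidates.append((prev, acc))
            seen.setdefault(acc.addr, []).append(acc)
        return candidates

    trace = [
        Access("T1", 0xdead, True, frozenset({"L1"})),
        Access("T2", 0xdead, True, frozenset()),   # no common lock -> candidate
    ]
    print(len(race_candidates(trace)))   # 1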
16

Hardware assisted memory checkpointing and applications in debugging and reliability

Doudalis, Ioannis 25 July 2011 (has links)
Software debugging and system reliability/availability are among the most challenging problems the computing industry faces today, with direct impact on the development and operating costs of computing systems. A promising debugging technique that helps programmers identify and fix the causes of software bugs far more efficiently is bidirectional debugging, which enables the user to execute the program in "reverse"; a typical method for recovering a system after a fault is backwards error recovery, which restores the system to the last error-free state. Both reverse execution and backwards error recovery rely on memory checkpoints, which are used to restore the program or system to a prior point in time and re-execute until the point of interest. The checkpointing frequency is the primary factor affecting both the latency of reverse execution and the recovery time of the system: more frequent checkpoints reduce the necessary re-execution time. Creating checkpoints frequently, however, poses performance challenges, because of the increased number of memory reads and writes needed to copy the modified system/program memory, and because of the software interventions, additional synchronization, I/O, and so on required to create a checkpoint. In this thesis I examine a number of hardware accelerators whose role is to create frequent memory checkpoints in the background at minimal performance overhead. For reverse execution, I propose the HARE and Euripus hardware checkpoint accelerators. HARE and Euripus create different types of checkpoints and employ different methods for tracking modified memory; as a result, they have different hardware costs and provide different functionality, which directly affects the latency of reverse execution. For improving system availability, I propose the Kyma hardware accelerator. Kyma enables simultaneous creation of checkpoints at different frequencies, which allows the system to recover from multiple types of errors and to tolerate variable error-detection latencies. The Kyma and Euripus hardware engines have similar architectures, but the functionality of the Kyma engine is optimized to further reduce performance overheads and improve system reliability. The functionality of the Kyma and Euripus engines can be combined into a unified accelerator that serves the needs of both bidirectional debugging and system recovery.
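
The incremental checkpointing idea these accelerators implement in hardware can be pictured with a small software sketch: record the old contents of each page the first time it is modified within a checkpoint interval, so that rollback only replays that undo log instead of copying all of memory. This is purely illustrative and hypothetical, not a description of HARE, Euripus, or Kyma.

    # Software sketch of incremental (undo-log) memory checkpointing.
    PAGE = 4096

    class CheckpointedMemory:
        def __init__(self, size):
            self.mem = bytearray(size)
            self.undo = [{}]        # per-interval: page index -> original bytes

        def checkpoint(self):
            """Start a new checkpoint interval."""
            self.undo.append({})

        def write(self, addr, data):
            """On first write to a page in this interval, save its old contents."""
            page = addr // PAGE
            current = self.undo[-1]
            if page not in current:
                current[page] = bytes(self.mem[page * PAGE:(page + 1) * PAGE])
            self.mem[addr:addr + len(data)] = data

        def rollback(self):
            """Restore the most recent checkpoint (reverse execution or
            backwards error recovery would re-execute forward from here)."""
            for page, old in self.undo.pop().items():
                self.mem[page * PAGE:(page + 1) * PAGE] = old

    mem = CheckpointedMemory(4 * PAGE)
    mem.checkpoint()
    mem.write(100, b"corrupted data")
    mem.rollback()                      # memory back to its pre-write contents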
