1

Authenticating turbocharger performance utilizing ASME performance test code correction methods

Shultz, Jacque January 1900 (has links)
Master of Science / Department of Mechanical and Nuclear Engineering / Kirby S. Chapman / Continued regulatory pressure necessitates the use of precisely designed turbochargers to create the design trapped equivalence ratio within large-bore stationary engines used in the natural gas transmission industry. The upgraded turbochargers scavenge the exhaust gases from the cylinder and create the air manifold pressure and back pressure on the engine necessary to achieve a specific trapped mass. This combination serves to achieve the emissions reduction required by regulatory agencies. Many engine owner/operators request that an upgraded turbocharger be tested and verified prior to re-installation on the engine. Verifying the mechanical integrity and airflow performance prior to engine installation prevents field hardware iterations, and confirming the as-built turbocharger design specification before transporting the unit to the field can decrease downtime and installation costs. There are, however, technical challenges to overcome when comparing test-cell data to field conditions. This thesis discusses the corrections and testing methodology required to verify turbocharger on-site performance from data collected in a precisely designed testing apparatus. As the litmus test of the testing system, test performance data are corrected to site conditions per the design air specification. Prior to field installation, the turbocharger is fitted with instrumentation to collect field operating data that authenticate the turbocharger testing system and correction methods. The correction method used herein is the ASME Performance Test Code 10 (PTC 10) for Compressors and Exhausters, 1997 version.
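The general shape of such test-to-site corrections can be illustrated with the standard referred-quantity relations used throughout turbomachinery practice. The sketch below is a minimal illustration of that idea under assumed reference conditions and variable names; PTC 10 itself prescribes a more detailed procedure.

```python
import math

def corrected_performance(mass_flow, speed, t_inlet, p_inlet,
                          t_ref=288.15, p_ref=101.325):
    """Refer measured compressor mass flow (kg/s) and shaft speed (rpm)
    from test-cell inlet conditions to reference (site) conditions using
    the standard referred-quantity relations. Temperatures in K,
    pressures in kPa. Reference values here are illustrative assumptions."""
    theta = t_inlet / t_ref          # inlet temperature ratio
    delta = p_inlet / p_ref          # inlet pressure ratio
    flow_corr = mass_flow * math.sqrt(theta) / delta
    speed_corr = speed / math.sqrt(theta)
    return flow_corr, speed_corr

# Example: test-cell data at 305 K, 98 kPa referred to standard conditions.
flow, rpm = corrected_performance(mass_flow=5.2, speed=21000,
                                  t_inlet=305.0, p_inlet=98.0)
print(f"corrected flow = {flow:.2f} kg/s, corrected speed = {rpm:.0f} rpm")
```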
2

An Automated Tool For Requirements Verification

Tekin, Yasar 01 September 2004 (has links) (PDF)
In today's world, only those software organizations that consistently produce high-quality products can succeed. This situation demands the effective use of defect prevention and detection techniques. One of the most effective defect detection techniques used in the software development life cycle is verification of software requirements, applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet today's work environment needs, their effectiveness can be increased. This study focuses on the development and implementation of a tool that automates verification of software requirements modeled in Aris eEPC and Organizational Chart for automatically detectable defects. The application of reading techniques on a project and a comparison of the results of manual and automated verification techniques applied to a project are also discussed.
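To make "automatically detectable defects" concrete, here is a minimal sketch of a structural check over a requirements model graph. The model encoding and the two rules (functions without a responsible role, dangling elements) are hypothetical illustrations, not the thesis tool's actual rule set.

```python
# Hypothetical in-memory encoding of an EPC-style requirements model.
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                        # e.g. "event", "function"
    successors: list = field(default_factory=list)
    owner: str | None = None         # role from the organizational chart

def find_defects(model: dict) -> list:
    """Flag structural defects that need no human interpretation."""
    defects = []
    for name, el in model.items():
        if el.kind == "function" and el.owner is None:
            defects.append(f"function '{name}' has no responsible role")
        if not el.successors and el.kind != "event":
            defects.append(f"'{name}' dangles: no outgoing connection")
    return defects

model = {
    "order received": Element("event", ["check order"]),
    "check order": Element("function", []),   # dangling, no owner
}
for d in find_defects(model):
    print("defect:", d)
```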
3

Ověření metodiky Testování webových služeb nástrojem SoapUI / Verification methodology for Web services testing with SoapUI

Jirmusová, Radka January 2016 (has links)
This thesis focuses on web services testing with the SoapUI tool, particularly on a verification methodology for web services testing with SoapUI. The main objective is to verify that methodology. Specific goals include an introduction to the basic concepts and principles of web services; a description of the testing process, including the types of tests and the specifics of testing web services; an introduction to the methodology for web services testing with SoapUI; practical verification of the methodology on a real information system; and suggestions for adapting the methodology based on that verification. The theoretical part summarizes the fundamentals of web services technology and web services testing, with particular attention to the methodology for web services testing with SoapUI and to the SoapUI tool itself. The practical part introduces the test system at Česká pojišťovna, a. s. and Generali pojišťovna, a. s., where the methodology is verified, and then presents suggestions for adapting or extending the methodology based on that verification.
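For readers unfamiliar with web-service testing, a minimal functional test of the kind such a methodology covers might look like the following sketch. The endpoint, SOAPAction header, and envelope are hypothetical placeholders, not artifacts from the thesis.

```python
# A minimal functional web-service test: send a SOAP request, assert on
# the response. SoapUI expresses the same idea declaratively.
import requests

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ex="http://example.com/policy">
  <soapenv:Body>
    <ex:GetPolicyStatus>
      <ex:policyId>12345</ex:policyId>
    </ex:GetPolicyStatus>
  </soapenv:Body>
</soapenv:Envelope>"""

def test_get_policy_status(url="http://localhost:8080/ws/policy"):
    resp = requests.post(
        url,
        data=ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "GetPolicyStatus"},
        timeout=10,
    )
    # Functional assertions: transport status and payload content.
    assert resp.status_code == 200, resp.status_code
    assert "status" in resp.text

if __name__ == "__main__":
    test_get_policy_status()
```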
4

ENABLING REAL TIME INSTRUMENTATION USING RESERVOIR SAMPLING AND BIN PACKING

Sai Pavan Kumar Meruga (16496823) 30 August 2023 (has links)
<p><em>Software Instrumentation is the process of collecting data during an application’s runtime,</em></p> <p><em>which will help us debug, detect errors and optimize the performance of the binary. The</em></p> <p><em>recent increase in demand for low latency and high throughput systems has introduced new</em></p> <p><em>challenges to the process of Software Instrumentation. Software Instrumentation, especially</em></p> <p><em>dynamic, has a huge impact on systems performance in scenarios where there is no early</em></p> <p><em>knowledge of data to be collected. Naive approaches collect too much or too little</em></p> <p><em>data, negatively impacting the system’s performance.</em></p> <p><em>This thesis investigates the overhead added by reservoir sampling algorithms at different</em></p> <p><em>levels of granularity in real-time instrumentation of distributed software systems. Also, this thesis describes the implementation of sampling techniques and algorithms to reduce the overhead caused by instrumentation.</em></p>
5

Parameterized Verification and Synthesis for Distributed Agreement-Based Systems

Nouraldin Jaber (13796296) 19 September 2022 (has links)
<p> </p> <p>Distributed agreement-based systems use common distributed agreement protocols such as leader election and consensus as building blocks for their target functionality—processes in these systems may need to agree on a leader, on the members of a group, on owners of locks, or on updates to replicated data. Such distributed agreement-based systems are common and potentially permit modular, scalable verification approaches that mimic their modular design. Interestingly, while there are many verification efforts that target agreement protocols themselves, little attention has been given to distributed agreement-based systems that build on top of these protocols. </p> <p>In this work, we aim to develop a fully-automated, modular, and usable parameterized verification approach for distributed agreement-based systems. To do so, we need to overcome the following challenges. First, the fully automated parameterized verification problem, i.e, the problem of algorithmically checking if the system is correct for any number of processes, is a well-known <em>undecidable </em>problem. Second, to enable modular verification that leverages the inherently-modular nature of these agreement-based systems, we need to be able to support <em>abstractions </em>of agreement protocols. Such abstractions can replace the agreement protocols’ implementations when verifying the overall system; enabling modular reasoning. Finally, even when the verification is fully automated, a system designer still needs assistance in <em>modeling </em>their distributed agreement-based systems. </p> <p>We systematically tackle these challenges through the following contributions. </p> <p>First, we support efficient, decidable verification of distributed agreement-based systems by developing a computational model—the GSP model—for reasoning about distributed (agreement-based) systems that admits decidability and <em>cutoff </em>results. Cutoff results enable practical verification by reducing the parameterized verification problem to the verification problem of a system with a fixed, finite number of processes. The GSP model supports generalized communication primitives and global guards, both of which are essential to enable abstractions of agreement protocols. </p> <p>Then, we address the usability and modularity aspects by developing a framework, QuickSilver, tailored for modeling and modular parameterized verification of distributed agreement-based systems. QuickSilver provides an intuitive domain-specific language, called Mercury, that is equipped with two agreement primitives capable of abstracting away agreement protocols when modeling agreement-based systems; enabling modular verification. QuickSilver extends the decidability and cutoff results of the GSP model to provide fully automated, efficient parameterized verification for a large class of systems modeled in Mercury. </p> <p>Finally, we leverage synthesis techniques to further enhance the usability of our approach and propose Cinnabar, a tool that supports synthesis of distributed agreement-based systems with efficiently-decidable parameterized verification. Cinnabar allows a system de- signer to provide a sketch of their Mercury model and uses a counterexample-guided synthesis procedure to search for model completions that both belong to the efficiently-decidable fragment of Mercury and are correct. 
</p> <p>We evaluate our contributions on various interesting distributed agreement-based systems adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more. </p>
6

Revamping Binary Analysis with Sampling and Probabilistic Inference

Zhuo Zhang (16398420) 19 June 2023 (has links)
<p>Binary analysis, a cornerstone technique in cybersecurity, enables the examination of binary executables, irrespective of source code availability.</p> <p>It plays a critical role in understanding program behaviors, detecting software bugs, and mitigating potential vulnerabilities, specially in situations where the source code remains out of reach.</p> <p>However, aligning the efficacy of binary analysis with that of source-level analysis remains a significant challenge, primarily due to the uncertainty caused by the loss of semantic information during the compilation process.</p> <p><br></p> <p>This dissertation presents an innovative probabilistic approach, termed as <em>probabilistic binary analysis</em>, designed to combat the intrinsic uncertainty in binary analysis.</p> <p>It builds on the fundamental principles of program sampling and probabilistic inference, enhanced further by an iterative refinement architecture.</p> <p>The dissertation suggests that a thorough and practical method of sampling program behaviors can yield a substantial quantity of hints which could be instrumental in recovering lost information, despite the potential inclusion of some inaccuracies.</p> <p>Consequently, a probabilistic inference technique is applied to systematically incorporate and process the collected hints, suppressing the incorrect ones, thereby enabling the interpretation of high-level semantics.</p> <p>Furthermore, an iterative refinement mechanism is deployed to augment the efficiency of the probabilistic analysis in subsequent applications, facilitating the progressive enhancement of analysis outcomes through an automated or human-guided feedback loop.</p> <p><br></p> <p>This work offers an in-depth understanding of the challenges and solutions related to assessing low-level program representations and systematically handling the inherent uncertainty in binary analysis. </p> <p>It aims to contribute to the field by advancing the development of precise, reliable, and interpretable binary analysis solutions, thereby setting the groundwork for future exploration in this domain.</p>
7

SUNNYMILKFUZZER - AN OPTIMIZED FUZZER FOR JVM-BASED LANGUAGE

Junyang Shao (16649343) 27 July 2023 (has links)
<p>This thesis presents an in-depth investigation into the opportunities of optimizing the performance (throughput) of fuzzing on Java Virtual Machine (JVM)-based languages. The study identifies five main areas for potential optimization, each of which contributes to the performance bottlenecks in the existing state-of-the-art Java fuzzer, Jazzer.</p> <p><br></p> <p>Firstly, the use of coverage probes is recognized as costly due to the native method call, including call frame generation and destruction, while it only performs a simple byte increment. Secondly, the probes may become exhausted, which subsequently cease to generate signals for new interesting inputs, while the associated costs persist. Thirdly, the scanning of the coverage map is expensive, particularly for targets with a large loaded bytecode. Given that test inputs can only execute a portion of these, the probes for most bytecodes are scanned repeatedly without generating any signals, indicating a need for a more structured coverage map design to skip the code probes effectively. Lastly, exception handling in JVM is costly as it automatically fills in the stack trace whenever an exception object is created, even when most targets don't utilize this information. </p> <p><br></p> <p>The study then designs and implements optimization techniques for these opportunities. We believe we provide the optimal solution for the first opportunity, while better optimizations could be proposed for the second, third, and fourth. The collective improvement brought about by these implementations is on average 138% and up to 441% in throughput. This work, thus, offers valuable insights into enhancing the efficiency of fuzz testing in JVM languages and paves the way for further research in optimizing other areas of JVM-based-language fuzzing performance.</p>
8

Composable, Sound Transformations for Nested Recursion and Loops

Kirshanthan Sundararajah (16647885) 26 July 2023 (has links)
<p>    </p> <p>Programs that use loops to operate over arrays and matrices are generally known as <em>regular programs</em>. These programs appear in critical applications such as image processing, differential equation solvers, and machine learning. Over the past few decades, extensive research has been done on composing, verifying, and applying scheduling transformations like loop interchange and loop tiling for regular programs. As a result, we have general frameworks such as the polyhedral model to handle transformations for loop-based programs. Similarly, programs that use recursion and loops to manipulate pointer-based data structures are known as <em>irregular programs</em>. Irregular programs also appear in essential applications such as scientific simulations, data mining, and graphics rendering. However, there is no analogous framework for recursive programs. In the last decade, although many scheduling transformations have been developed for irregular programs, they are ad-hoc in various aspects, such as being developed for a specific application and lacking portability. This dissertation examines principled ways to handle scheduling transformations for recursive programs through a unified framework resulting in performance enhancement. </p> <p>Finding principled approaches to optimize irregular programs at compile-time is a long-standing problem. We specifically focus on scheduling transformations that reorder a program’s operations to improve performance by enhancing locality and exploiting parallelism. In the first part of this dissertation, we present PolyRec, a unified general framework that can compose and apply scheduling transformations to nested recursive programs and reason about the correctness of composed transformations. PolyRec is a first-of-its-kind unified general transformation framework for irregular programs consisting of nested recursion and loops. It is built on solid theoretical foundations from the world of automata and transducers and provides a fundamentally novel way to think about recursive programs and scheduling transformations for them. The core idea is designing mechanisms to strike a balance between the expressivity in representing the set of dynamic instances of computations, transformations, and dependences and the decidability of checking the correctness of composed transformations. We use <em>multi-tape </em>automata and transducers to represent the set of dynamic instances of computations and transformations, respectively. These machines are similar yet more expressive than their classical single-tape counterparts. While in general decidable properties of classical machines are undecidable for multi-tape machines, we have proven that those properties are decidable for the class of machines we consider, and we present algorithms to verify these properties. Therefore these machines provide the building blocks to compose and verify scheduling transformations for nested recursion and loops. The crux of the PolyRec framework is its regular string-based representation of dynamic instances that allows to lexicographically order instances identically to their execution order. All the transformations considered in PolyRec require different ordering of these strings representable only with <em>additive </em>changes to the strings. </p> <p>Loop transformations such as <em>skewing </em>require performing arithmetic on the representation of dynamic instances. 
In the second part of this dissertation, we explore this space of transformations by introducing skewing to nested recursion. Skewing plays an essential role in producing easily parallelizable loop nests from seemingly difficult ones due to dependences carried across loops. The inclusion of skewing for nested recursion to PolyRec requires significant extensions to representing dynamic instances and transformations that facilitate <em>performing arithmetic using strings</em>. First, we prove that the machines that represent the transformations are still composable. Then we prove that the representation of dependences and the algorithm that checks the correctness of composed transformations hold with minimal changes. Our new extended framework is known as UniRec, since it resembles the unimodular transformations for perfectly nested loop nests, which consider any combination of the primary transformations interchange, reversal, and skewing. UniRec opens possibilities of producing newly composed transformations for nested recursion and loops and verifying their correctness. We claim that UniRec completely subsumes the unimodular framework for loop transformations since nested recursion is more general than loop nests. </p>
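As a concrete illustration of why skewing matters for parallelism, the sketch below applies the textbook skewing-plus-interchange (wavefront) transformation to a loop nest whose cross-loop dependences block naive parallelization. It is a standard example, not code from the dissertation.

```python
import numpy as np

N = 6
A = np.zeros((N, N)); A[0, :] = 1.0; A[:, 0] = 1.0

# Original nest: A[i][j] depends on A[i-1][j] and A[i][j-1], so neither
# loop can run in parallel as written.
for i in range(1, N):
    for j in range(1, N):
        A[i, j] = A[i - 1, j] + A[i, j - 1]

# Skewed (wavefront) nest: with t = i + j, every iteration on wavefront t
# depends only on wavefront t - 1, so the inner loop over i is now
# parallelizable.
B = np.zeros((N, N)); B[0, :] = 1.0; B[:, 0] = 1.0
for t in range(2, 2 * N - 1):                 # wavefront index t = i + j
    for i in range(max(1, t - N + 1), min(N, t)):
        j = t - i
        B[i, j] = B[i - 1, j] + B[i, j - 1]

assert np.array_equal(A, B)                   # same result, new schedule
```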
9

TAMING IRREGULAR CONTROL-FLOW WITH TARGETED COMPILER TRANSFORMATIONS

Charitha Saumya Gusthinna Waduge (15460634) 15 May 2023 (has links)
<p>    </p> <p>Irregular control-flow structures like deeply nested conditional branches are common in real-world software applications. Improving the performance and efficiency of such programs is often challenging because it is difficult to analyze and optimize programs with irregular control flow. We observe that real-world programs contain similar or identical computations within different code paths of the conditional branches. Compilers can merge similar code to improve performance or code size. However, existing compiler optimizations like code hoisting/sinking, and tail merging do not fully exploit this opportunity. We propose a new technique called Control-Flow Melding (CFM) that can merge similar code sequences at the control-flow region level. We evaluate CFM in two applications. First, we show that CFM reduces the control divergence in GPU programs and improves the performance. Second, we apply CFM to CPU programs and show its effectiveness in reducing code size without sacrificing performance. In the next part of this dissertation, we investigate how CFM can be extended to improve dynamic test generation techniques like Dynamic Symbolic Execution (DSE). DSE suffers from path explosion problem when many conditional branches are present in the program. We propose a non-semantics-preserving branch elimination transformation called CFM-SE that reduces the number of symbolic branches in a program. We also provide a framework for detecting and reasoning about false positive bugs that might be added to the program by non-semantics-preserving transformations like CFM-SE. Furthermore, we evaluate CFM-SE on real-world applications and show its effectiveness in improving DSE performance and code coverage. </p>
10

Nonpoint Source Pollutant Modeling in Small Agricultural Watersheds with the Water Erosion Prediction Project

Ryan McGehee (14054223) 04 November 2022 (has links)
<p>Current watershed-scale, nonpoint source (NPS) pollution models do not represent the processes and impacts of agricultural best management practices (BMP) on water quality with sufficient detail. To begin addressing this gap, a novel process-based, watershed-scale, water quality model (WEPP-WQ) was developed based on the Water Erosion Prediction Project (WEPP) and the Soil and Water Assessment Tool (SWAT) models. The proposed model was validated at both hillslope and watershed scales for runoff, sediment, and both soluble and particulate forms of nitrogen and phosphorus. WEPP-WQ is now one of only two models which simulates BMP impacts on water quality in ‘high’ detail, and it is the only one not based on USLE sediment predictions. Model validations indicated that particulate nutrient predictions were better than soluble nutrient predictions for both nitrogen and phosphorus. Predictions of uniform conditions outperformed nonuniform conditions, and calibrated model simulations performed better than uncalibrated model simulations. Applications of these kinds of models in real-world, historical simulations are often limited by a lack of field-scale agricultural management inputs. Therefore, a prototype tool was developed to derive management inputs for hydrologic models from remotely sensed imagery at field-scale resolution. At present, only predictions of crop, cover crop, and tillage practice inference are supported and were validated at annual and average annual time intervals based on data availability for the various management endpoints. Extraction model training and validation were substantially limited by relatively small field areas in the observed management dataset. Both of these efforts contribute to computational modeling research and applications pertaining to agricultural systems and their impacts on the environment.</p>
