1

Tennis ball degradation

Steele, Carolyn January 2006
Despite anecdotal evidence of changes to tennis ball characteristics and play properties, little research has been directed towards understanding the causes and effects of tennis ball degradation. Improved racket technology and player fitness have contributed to an increase in the speed of the game, yet balls have seen few advancements over the same period. Several factors obviously contribute to tennis ball degradation: natural pressure loss in pressurised balls, changes to the cloth covering due to court and racket impacts, and precipitation and other environmental factors. As recent tennis research has focused on new balls, there is a need to investigate the other ball conditions present in the game. This thesis provides a structured investigation into the causes and effects of ball degradation, an objective assessment of the effects of degradation on ball performance, and an account of players' subjective perceptions of ball aesthetics and play properties. Particular attention is given to ball fuzziness. Excessive fuzziness can result from manufacturing variability, court and racket interactions, and environmental conditions, yet there is currently no standardised method for assessing ball surface condition. An objective measure of ball fuzziness was therefore developed and used in the analysis of nearly 4000 individual ball images. Court and racket impacts, precipitation, natural pressure loss, and repeated impacts were analysed for their effects on ball degradation. An assessment of ball performance used ball impact and aerodynamic data to determine significant differences between balls and to develop an improved ball trajectory model.
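The abstract does not reproduce the trajectory model itself, but the general shape of such a model is standard projectile physics with aerodynamic drag. As a rough, hedged illustration (all parameter values below are assumptions for illustration, not the thesis's measured data), a fluffier, degraded ball can be modelled as an increase in drag coefficient:

```python
# Illustrative sketch only: 2-D tennis ball trajectory under gravity and
# aerodynamic drag. Degradation enters through the drag coefficient Cd,
# since a fluffed-up cloth cover increases effective drag. Values assumed.
import math

def simulate_range(v0, angle_deg, cd=0.55, dt=0.001):
    m = 0.057          # ball mass [kg] (regulation ball)
    d = 0.067          # ball diameter [m]
    rho = 1.225        # air density [kg/m^3]
    area = math.pi * (d / 2) ** 2
    k = 0.5 * rho * cd * area / m   # drag acceleration per (speed^2)

    x, y = 0.0, 1.0                 # launch from ~1 m (racket height)
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    while y > 0.0:
        v = math.hypot(vx, vy)
        vx += -k * v * vx * dt      # drag opposes velocity
        vy += (-9.81 - k * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x                        # horizontal range [m]

# A degraded (fluffier) ball with a higher Cd lands noticeably shorter:
print(simulate_range(50.0, 5.0, cd=0.55))
print(simulate_range(50.0, 5.0, cd=0.65))
```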
2

Phase Locked Loop and Modulo Games, The Dogma Loops; and “Finding Ibrida”

Stewart, Kenneth David January 2016
This dissertation consists of two independent musical compositions and an article detailing the design and assembly of an electric guitar, with particular emphasis on its carefully curated suite of embedded effects. The first piece, 'Phase Locked Loop and Modulo Games', is scored for electric guitar and a single echo of equal volume less than a beat away. One could think of the piece as a 15-minute canon at the unison at the dotted eighth note (or at times the quarter or triplet quarter); however, the compositional motivation is more about weaving a composite texture between the guitar and its echo that, while in theory extremely contrapuntal, is in actuality simply a single [superhuman] melodic line. The second piece, 'The Dogma Loops', picks up a few compositional threads left by 'Phase Locked Loop' and weaves them into an entirely new tapestry. 'Phase Locked Loop' is motivated by the creation of a complex musical composite that is for the most part electronically transparent. 'The Dogma Loops' questions that same notion of composite electronic complexity by essentially asking: "what are the inputs to an interactive electronic system that create the most complex outputs via the simplest musical means possible?" 'The Dogma Loops' is scored for electric guitar (doubling on ukulele), violin and violoncello. All of the principal instruments except the ukulele require an electronic pickup. The work is in three sections played attacca: [Automation Games], [Point of Origin] and [Cloning Vectors]. The third and final component of the document is the article 'Finding Ibrida'. The article details the design and assembly of an electric guitar with integrated effects, while also providing the deeper conceptual and technical context that motivated the effort and informed the challenges of hybridizing the various technologies (tubes, transistors, digital effects and a microcontroller subsystem). The project was motivated by a desire for rigorous, hands-on technical engagement with analog signal processing as applied to the electric guitar. 'Finding Ibrida' explores sound, some myths and lore of guitar tech, and the history of electric guitar distortion and its culture of sonic exploration. / Dissertation
3

Introducing probabilities within grey-box fuzzing / Hänsynstagande till sannolikheter inom grey-box fuzzing

Sletmo, Patrik January 2019
Over recent years, the software industry has faced a steady increase in the number of exposed and exploited software vulnerabilities. With more software and devices being connected to the internet every day, the need for proactive security measures has never been more important. One promising technology for making software more secure is fuzz testing. This automated testing technique is based on generating a large number of test cases with the intention of revealing dangerous bugs and vulnerabilities. In this thesis, a new direction within grey-box fuzz testing is evaluated against previous work. The presented approach uses sampled probability data to guide the fuzz testing towards program states that are expected to be easy to reach and beneficial for the discovery of software vulnerabilities. Evaluation of the design shows that the suggested approach provides no obvious advantage over existing solutions, but also indicates that the performance advantage could depend on the structure of the system under test. Analysis of the design itself, however, highlights several design decisions that could benefit from more extensive research. While the design proposed in this thesis is insufficient to replace current state-of-the-art fuzz-testing software, it provides a solid foundation for future research in the field. With the many insights gained from the design and implementation work, this thesis aims both to inspire others and to showcase the challenges of creating a probability-based approach to grey-box fuzz testing.
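As a hedged sketch of the general idea (not the thesis's actual implementation), probability-guided seed scheduling might bias a grey-box fuzzer's choice of mutation origin towards seeds whose paths are estimated to be easy to reach. All names and numbers below are illustrative assumptions:

```python
# Toy probability-guided seed scheduler for a grey-box fuzzer.
import random

class Seed:
    def __init__(self, data, branch_hits):
        self.data = data
        self.branch_hits = branch_hits   # branches covered by this seed

def reach_probability(seed, branch_samples):
    # branch_samples[b] = sampled fraction of random inputs that take
    # branch b; seeds touching "cheap" branches score higher.
    probs = [branch_samples.get(b, 0.01) for b in seed.branch_hits]
    return sum(probs) / len(probs)

def pick_seed(corpus, branch_samples):
    weights = [reach_probability(s, branch_samples) for s in corpus]
    return random.choices(corpus, weights=weights, k=1)[0]

corpus = [Seed(b"GET /", {1, 2}), Seed(b"POST /", {1, 3, 4})]
samples = {1: 0.9, 2: 0.5, 3: 0.05, 4: 0.02}
print(pick_seed(corpus, samples).data)
```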
4

Torpedo: A Fuzzing Framework for Discovering Adversarial Container Workloads

McDonough, Kenton Robert 13 July 2021 (has links)
Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. In contrast to traditional VM technologies, containers theoretically provide the same process isolation guarantees with less overhead, and additionally introduce finer-grained options for resource allocation. Cloud providers have widely adopted container-based architectures as the standard for multi-tenant hosting services, and rely on the underlying security guarantees to ensure that adversarial workloads cannot disrupt the activities of coresident containers on a given host. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way cgroups have been added to the Linux kernel, there exist vulnerabilities that allow containerized processes to generate "out of band" workloads and negatively impact the performance of the entire host without being appropriately charged. Because of the relative complexity of the kernel, discovering these vulnerabilities through traditional static analysis tools may be very challenging. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for sequences of system calls that break process isolation boundaries. TORPEDO combines traditional code coverage feedback with resource utilization measurements to motivate the generation of "adversarial" programs based on user-defined criteria. Experiments conducted on the default Docker runtime runC, as well as the virtualized runtime gVisor, independently reconfirm several known vulnerabilities and discover interesting new results and bugs, giving us a promising framework for further research. / Master of Science / Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. By abstracting away many of the system details required to deploy software, developers can rapidly prototype and deploy new software products and take advantage of massive distributed frameworks. These paradigms are supported by corresponding business models offered by cloud providers, who allocate space on powerful physical hardware among many potentially competing services. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way containers have been implemented in the Linux kernel, there exist vulnerabilities that allow containerized programs to generate "out of band" workloads and negatively impact the performance of other containers. In general, these vulnerabilities are difficult to identify but can be very severe. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for programs that negatively impact other containers. TORPEDO uses a novel technique that combines resource monitoring with code coverage approximations, and initial testing on common container software has revealed interesting new vulnerabilities and bugs.
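As a hedged illustration of the feedback loop described above (TORPEDO itself extends SYZKALLER, which is written in Go; this sketch merely restates the scoring idea, and all weights, thresholds, and field names are assumptions):

```python
# Toy fitness function combining coverage feedback with "out of band"
# resource measurements, so the fuzzer evolves adversarial workloads.
def adversarial_score(new_edges, cpu_outside_cgroup, io_outside_cgroup,
                      coverage_weight=1.0, resource_weight=50.0):
    """new_edges: count of previously unseen coverage edges.
    cpu_outside_cgroup / io_outside_cgroup: measured host resource use
    (e.g. kernel-thread CPU seconds, writeback bytes) that was *not*
    charged to the container's cgroup while the program ran."""
    resource_signal = cpu_outside_cgroup + io_outside_cgroup
    return coverage_weight * new_edges + resource_weight * resource_signal

def keep_program(prog_stats, threshold=10.0):
    # Retain programs for further mutation if they either expand coverage
    # or generate suspiciously large uncharged workloads.
    return adversarial_score(**prog_stats) > threshold

print(keep_program({"new_edges": 3, "cpu_outside_cgroup": 0.2,
                    "io_outside_cgroup": 0.0}))
```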
5

The Hare, the Tortoise and the Fox: Extending Anti-Fuzzing

Dewitz, Anton, Olofsson, William January 2022
Background. The goal of our master's thesis is to reduce the effectiveness of fuzzers that use coverage accounting. Our method is based on how the coverage accounting in TortoiseFuzz rates code paths in order to find memory corruption bugs: it simply looks for functions that tend to cause vulnerabilities, and considers more of them to be better. Our approach is to insert extra calls to these memory functions inside the fake code paths generated by anti-fuzzing. Objectives. Our thesis surveys current anti-fuzzing techniques to determine which tool to extend with our counter to coverage accounting. We conduct an experiment in which we run several fuzzers on different benchmark programs to evaluate our tool. Methods. The foundation for the anti-fuzzing tool is obtained through a literature review that evaluates current anti-fuzzing techniques and examines how coverage accounting prioritizes code paths. Afterwards, an experiment is conducted to evaluate the created tool. Fuzzers are evaluated on the FuzzBench platform, a homogeneous test environment that lets future research be compared more easily with prior work on a standard platform. Benchmarks representative of real-world applications are chosen from within this platform. Each benchmark is executed in three versions: the original, one protected by a prior anti-fuzzing tool, and one protected by our new anti-fuzzing tool. Results. The experiment showed that our anti-fuzzing tool successfully lowered the number of unique bugs found by TortoiseFuzz, even when the benchmark was also protected by the previously developed anti-fuzzing tool. Conclusions. We conclude, based on our results, that our tool shows promise against a fuzzer using coverage accounting. Further study will push fuzzers to become even better in order to overcome new anti-fuzzing methods.
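As a rough, hypothetical illustration of the counter-measure described above (not the authors' actual tool), an anti-fuzzing pass might emit C decoy branches stuffed with the memory functions that coverage accounting rewards:

```python
# Toy generator of C decoy branches. The guard compares against a value a
# fuzzer is unlikely to synthesise, so the decoy is rarely entered but its
# memory-function calls still inflate the path's coverage-accounting rating.
import random

MEMORY_FUNCS = ["memcpy(dst, src, n)", "malloc(n)", "strcpy(dst, src)"]

def make_decoy(branch_id, calls=3):
    body = "\n".join(f"        (void){random.choice(MEMORY_FUNCS)};"
                     for _ in range(calls))
    return (f"    if (input_checksum == 0x{branch_id:08x}u) {{\n"
            f"{body}\n"
            f"    }}\n")

print(make_decoy(0xDEADBEEF))
```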
6

Applications of information sharing for code generation in process virtual machines

Kyle, Stephen Christopher January 2016
Process virtual machines form the backbone of many computing environments today, so it is important that they be both performant and robust in mobile, personal desktop, and enterprise applications. This thesis focusses on code generation within these virtual machines, particularly addressing situations where redundant work is being performed. The goal is to exploit information sharing in order to improve the performance and robustness of virtual machines that are accelerated by native code generation. First, the thesis investigates the potential to share generated code between multiple threads in a dynamic binary translator used to perform instruction set simulation. This is done through a code generation design that allows native code to be executed by any simulated core, and by adding a mechanism to share native code regions between threads. This is shown to improve the average performance of multi-threaded benchmarks by 1.4x when simulating 128 cores on a quad-core host machine. Secondly, the ahead-of-time code generation system used for executing Android applications is improved through the use of profiling. The thesis investigates the potential for profiles produced by individual users of applications to be shared and merged together to produce a generic profile that still provides substantial benefit for a new user, who is then able to skip the expensive profiling phase. These profiles can not only be used for selective compilation to reduce code size and installation time, but can also be used for focussed optimisation of vital code regions of an application in order to improve overall performance. With selective compilation applied to a set of popular Android applications, code size can be reduced by 49.9% on average, while installation time can be reduced by 31.8%, with only an average 8.5% increase in the amount of sequential runtime required to execute the collected profiles. The thesis also shows that, among the tested users, the use of a crowd-sourced and merged profile does not significantly affect their estimated performance loss from selective compilation (0.90x-0.92x) in comparison to when they perform selective compilation with their own unique profile (0.93x). Furthermore, by proposing a new, more powerful code generator for Android's virtual machine, these same profiles can be used to perform focussed optimisation, which preliminary results show to increase runtime performance across a set of common Android benchmarks by 1.46x-10.83x. Finally, in such a situation where a new code generator is being added to a virtual machine, it is also important to test the code generator for correctness and robustness. The methods of execution of a virtual machine, such as interpreters and code generators, must share a set of semantics about how programs must be executed, and this can be exploited in order to improve testing. This is done through the application of domain-aware binary fuzzing and differential testing within Android's virtual machine. The thesis highlights a series of actual code generation and verification bugs that were found in Android's virtual machine using this testing methodology, and compares the proposed approach to other state-of-the-art fuzzing techniques.
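As a hedged sketch of the crowd-sourced profiling idea (the profile format and the 20% cut-off below are illustrative assumptions, not Android's actual profile machinery):

```python
# Toy crowd-sourced profile merging and selective-compilation choice:
# merge many users' method-hotness profiles into one generic profile,
# then compile ahead-of-time only the hottest fraction of methods.
from collections import Counter

def merge_profiles(profiles):
    merged = Counter()
    for p in profiles:
        merged.update(p)      # adds per-method hotness counts
    return merged

def select_for_aot(merged, fraction=0.2):
    # Compile only the top fraction of methods by aggregated hotness;
    # the rest stay interpreted/JIT-compiled, shrinking code size and
    # installation time at a small runtime cost.
    ranked = [m for m, _ in merged.most_common()]
    return set(ranked[:max(1, int(len(ranked) * fraction))])

user_a = {"View.draw": 120, "Json.parse": 40, "Log.d": 2}
user_b = {"View.draw": 90, "Net.fetch": 55, "Log.d": 1}
print(select_for_aot(merge_profiles([user_a, user_b])))
```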
7

Hermes: A Targeted Fuzz Testing Framework

Shortt, Caleb James 12 March 2015 (has links)
The use of security assurance cases (security cases) to provide evidence-based assurance of security properties in software is a young field in software engineering. A security case uses evidence to argue that a particular claim is true. For example, the highest-level claim may be that a given system is sufficiently secure, and it would include sub-claims that break that general claim down into more granular, tangible items, such as evidence or further claims. Random negative testing (fuzz testing) is used as evidence to support security cases and the assurance they provide. Due to resource constraints, many current approaches apply fuzz testing to a target system for a fixed amount of time, which may leave entire sections of code untouched [60]. The results may be used as evidence in a security case, but their quality varies with controllable variables, such as time, and uncontrollable variables, such as the random paths chosen by the fuzz-testing engine. This thesis presents Hermes, a proof-of-concept fuzz-testing framework that provides improved evidence for security cases by automatically targeting problem sections in software and selectively fuzz-testing them in a repeatable and timely manner. In our experiments, Hermes produced results with target code coverage comparable to a full, exhaustive fuzz-test run while significantly reducing the test execution time associated with an exhaustive run. These results provide a targeted piece of evidence for security cases which can be audited and refined for further assurance. Hermes' design allows it to be easily attached to continuous integration frameworks, where it can be executed alongside other frameworks in a given test suite. / Graduate
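As a minimal, hypothetical sketch of the targeting idea (not Hermes itself), a framework might rank problem sections by static-analysis warning density and split a fixed fuzzing budget proportionally among them, keeping each run repeatable and bounded:

```python
# Toy budget allocator: fuzz "problem sections" in proportion to how
# suspicious they look, instead of fuzzing the whole system uniformly
# until time runs out.
def allocate_budget(sections, total_seconds):
    """sections: {section_name: warning_count} -- assumed input format."""
    total_warnings = sum(sections.values()) or 1
    return {name: total_seconds * count / total_warnings
            for name, count in sections.items()}

plan = allocate_budget({"parser.c": 12, "net.c": 6, "util.c": 2}, 3600)
for section, seconds in sorted(plan.items()):
    print(f"fuzz {section} for {seconds:.0f}s")
```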
8

Random testing with sanitizers to detect concurrency bugs in embedded avionics software

Johansson, Viktor, Vallén, Alexander January 2018
Fuzz testing is a random testing technique that is effective at finding bugs in large software programs and protocols. We investigate whether the technique can be used to find bugs in multi-threaded applications by fuzzing a real-time embedded avionics platform together with a tool specialized in finding data races between threads. We chose to fuzz an API of the platform that is available to the applications executing on top of it. This thesis evaluates aspects of integrating a fuzzer (AFL) and a sanitizer (ThreadSanitizer) with an embedded system. We investigate the modifications needed to create a correct run-time environment for the system, including supplying test data in a safe manner, and we discuss hardware dependencies. We present a setup in which the tools can be used to find planted data races; however, the slowdown introduced by the tools is significant, and the fuzzer only managed to find very simple planted data races during the test runs. Our findings also indicate what appear to be conflicts in instrumentation between the fuzzer and the sanitizer.
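As a hedged illustration of the kind of glue such an integration needs (the queue path and binary name below are assumptions; the report string is the standard one ThreadSanitizer prints to stderr), one might replay a fuzzer's output queue against a TSan-instrumented build:

```python
# Toy replay harness: run each AFL queue entry through a target binary
# built with -fsanitize=thread and grep stderr for data-race reports.
import pathlib
import subprocess

QUEUE = pathlib.Path("findings/default/queue")   # assumed AFL++ layout
TARGET = "./target_tsan"                         # assumed TSan-built binary

def replay_queue():
    races = []
    for case in sorted(QUEUE.glob("id:*")):
        with case.open("rb") as f:
            proc = subprocess.run([TARGET], stdin=f,
                                  capture_output=True, timeout=10)
        if b"WARNING: ThreadSanitizer: data race" in proc.stderr:
            races.append(case.name)
    return races

if __name__ == "__main__":
    for name in replay_queue():
        print("data race reproduced by", name)
```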
9

Fuzz testing for design assurance levels

Gustafsson, Marcus, Holm, Oscar January 2017
With safety-critical software, it is important that the application is safe and stable. While such software can be quality-tested manually, automated testing has the potential to catch errors that manual testing will not. In addition, there is the possibility of saving time and cost by automating the testing process. This matters for avionics components, as much time and cost are spent testing and ensuring that the software does not crash or misbehave. This paper explores the usefulness of automated testing when combined with fuzz testing, and focuses on how to fuzz test applications classified into design assurance levels (DALs).
10

Functional and Security testing of a Mobile Application / Funktionell och säkerhetstestning av en mobil applikation

Sjöstrand, Johan, Westberg, Sara January 2017
A mobile application has been developed to be used for assistance in crisis scenarios. To assure that the application is dependable enough for such scenarios, it was put under test. This thesis investigates different approaches to functional testing and security testing. Five common methods of generating test cases for functional testing were identified, and four were applied to the application. The coverage achieved by each method was measured and compared. For this specific application under test, test cases from a method called decision-table testing achieved the highest code coverage. Nine functionality-related bugs were identified. Fuzz testing is a simple security-testing technique for efficiently finding security flaws, and was applied for security testing of our application. During the fuzz test, system security properties were breached: an unauthorized user could read and alter asset data, and the system's availability was also affected. Our overall conclusion was that, with more time, creating functional tests for smaller components of the application might have been more effective in finding faults and achieving coverage.
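As a brief, hypothetical sketch of decision-table testing (the conditions and actions below are invented for illustration and are not the application's actual rules), each combination of conditions becomes one test case:

```python
# Toy decision table: enumerate every combination of boolean conditions,
# attach the expected action to each rule, and emit one test case per rule.
from itertools import product

CONDITIONS = ["logged_in", "has_network", "gps_enabled"]

def expected_action(logged_in, has_network, gps_enabled):
    if not logged_in:
        return "show_login"
    if not has_network:
        return "queue_report_offline"
    return "send_report" if gps_enabled else "ask_for_location"

def decision_table():
    for values in product([True, False], repeat=len(CONDITIONS)):
        rule = dict(zip(CONDITIONS, values))
        yield rule, expected_action(**rule)

for rule, action in decision_table():
    print(rule, "->", action)
```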
