81

Cloneless: Code Clone Detection via Program Dependence Graphs with Relaxed Constraints

Simko, Thomas J 01 June 2019 (has links) (PDF)
Code clones are pieces of code that have the same functionality. While some clones may structurally match one another, others may look drastically different. The inclusion of code clones clutters a code base, increasing maintenance costs. Duplicate code is introduced through a variety of means, such as copy-pasting, code generated by tools, or developers unintentionally writing similar pieces of code. While manual clone identification may be more accurate than automated detection, it is infeasible due to the extensive size of many code bases. Software code clone detection methods have differing degrees of success depending on the analysis performed. This thesis outlines a method of detecting clones using a program dependence graph and subgraph isomorphism to identify similar subgraphs, ultimately illuminating clones. The project imposes few constraints when comparing code segments in order to potentially reveal more clones.
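
As a hedged illustration of the approach the abstract describes — matching program dependence graphs via subgraph isomorphism with relaxed node constraints — here is a minimal Python sketch using the networkx library. The toy PDGs, the node kinds, and the kind-only matching predicate are assumptions for illustration, not the thesis's implementation.

```python
# Sketch: clone detection via subgraph isomorphism on program dependence graphs.
# The node labels and the relaxed node_match predicate are illustrative assumptions.
import networkx as nx
from networkx.algorithms import isomorphism

def build_pdg(edges, kinds):
    """Build a toy PDG: edges are (src, dst) dependences, kinds map node -> kind."""
    g = nx.DiGraph()
    g.add_edges_from(edges)
    nx.set_node_attributes(g, kinds, "kind")
    return g

# Two code fragments with the same dependence structure but different node ids.
pdg_a = build_pdg([(1, 2), (2, 3)], {1: "assign", 2: "call", 3: "assign"})
pdg_b = build_pdg([(10, 20), (20, 30), (30, 40)],
                  {10: "assign", 20: "call", 30: "assign", 40: "call"})

# Relaxed constraint: only node kinds must agree, not variable names or literals.
matcher = isomorphism.DiGraphMatcher(
    pdg_b, pdg_a, node_match=lambda n1, n2: n1["kind"] == n2["kind"])

if matcher.subgraph_is_isomorphic():
    print("clone candidate:", matcher.mapping)  # pdg_b nodes -> pdg_a nodes
```
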
82

N-SLOPE: A One-Class Classification Ensemble for Nuclear Forensics

Kehl, Justin 01 June 2018 (has links) (PDF)
One-class classification is a specialized form of classification from the field of machine learning. Traditional classification attempts to assign unknowns to known classes, but cannot handle novel unknowns that do not belong to any of the known classes. One-class classification seeks to identify these outliers while still assigning unknowns to known classes when appropriate. One-class classification is applied here to the field of nuclear forensics: the study and analysis of nuclear material for the purpose of nuclear incident investigations. Nuclear forensics data poses an interesting challenge because false positive identification can prove costly and the data is often small, high-dimensional, and sparse, which is problematic for most machine learning approaches. A web application is built using the R programming language and the Shiny framework that incorporates N-SLOPE, a machine learning ensemble. N-SLOPE combines five existing one-class classifiers with a novel one-class classifier introduced here and uses ensemble learning techniques to combine their output. N-SLOPE is validated on three distinct data sets: Iris, Obsidian, and Galaxy Serpent 3, an enhanced version of a recent international nuclear forensics exercise. N-SLOPE achieves high classification accuracy on each data set (100%, 83.33%, and 83.33%, respectively) while holding the false positive detection rate to 0% across the board and correctly detecting every novel unknown in each data set. N-SLOPE is shown to be a useful and powerful tool to aid in nuclear forensic investigations.
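
A minimal sketch of the ensemble idea, assuming scikit-learn one-class classifiers and simple majority voting; the member classifiers, the Iris setup, and the voting rule are illustrative assumptions rather than N-SLOPE's actual components.

```python
# Sketch: a one-class ensemble with majority voting.
# The member classifiers and voting rule are assumptions, not N-SLOPE's design.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

X, y = load_iris(return_X_y=True)
train = X[y == 0]            # fit on a single known class
test = X[y != 0]             # everything else should be flagged as "novel"

members = [
    OneClassSVM(nu=0.05, gamma="scale"),
    IsolationForest(random_state=0),
    LocalOutlierFactor(novelty=True),  # novelty=True enables predict() on new data
]
for m in members:
    m.fit(train)

# Each member votes +1 (inlier) or -1 (outlier); the majority decides.
votes = np.stack([m.predict(test) for m in members])
is_novel = votes.sum(axis=0) < 0
print(f"{is_novel.mean():.0%} of held-out samples flagged as novel")
```
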
83

Point Based Approximate Color Bleeding with Cuda

Feeney, Nicholas D 01 June 2013 (has links) (PDF)
Simulating light is a very computationally expensive proposition. There are a wide variety of global illumination algorithms implemented and used by major motion picture companies to render interesting and believable scenes. Every algorithm strives to find a balance between speed and accuracy. The Point Based Approximate Color Bleeding (PBACB) algorithm is one of the most widely used in the field today. It is based on the central idea that the geometry and direct illumination of a scene can be approximated by a point cloud representation, which can then be used to generate the indirect illumination. The most basic unit of the point cloud is a surfel: a two-dimensional circle in space that contains the direct illumination for that section of space. The surfels are gathered into a tree structure, and approximations are generated for the different levels of the tree. This tree is then used to calculate the appropriate color bleeding effect to apply to the surfaces in a rendered image. The main goal of this project was to explore the possibility of applying CUDA to the PBACB algorithm. CUDA is an extension of the C/C++ programming languages that allows for parallel programming on the GPU. In this paper, we present our GPU-based implementation of the PBACB algorithm. The algorithm involves three central steps: creation of a surfel point cloud, generation of spherical harmonic approximations for the point cloud, and use of the surfel point cloud to generate an approximation of global illumination. For this project, CUDA was applied to two of these steps: the generation of the spherical harmonic representations and the application of the surfel point cloud to generate indirect illumination. Our final GPU algorithm was able to obtain a 4.0 times speedup over our CPU version. We also discuss future work, which could include the use of CUDA's Dynamic Parallelism and a stack-free implementation, which could further increase the speedups seen by our algorithm.
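
A rough sketch of the surfel-tree gathering step, assuming a plain color average at interior nodes and a fixed solid-angle cutoff; the thesis's implementation instead stores spherical harmonic approximations at tree levels and runs on the GPU with CUDA.

```python
# Sketch: gathering indirect light from a surfel tree. Interior nodes here
# store a plain aggregate instead of the spherical-harmonic approximations
# used by PBACB, and the solid-angle cutoff is an assumed constant.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Node:
    center: np.ndarray            # average position of surfels below this node
    radius: float                 # bounding radius
    color: np.ndarray             # aggregated direct illumination
    children: list = field(default_factory=list)  # empty => leaf surfel

MAX_SOLID_ANGLE = 0.05

def gather(node, point, out):
    """Accumulate approximate color bleeding at `point` from the surfel tree."""
    d2 = float(np.sum((node.center - point) ** 2))
    solid_angle = np.pi * node.radius**2 / max(d2, 1e-12)
    if not node.children or solid_angle < MAX_SOLID_ANGLE:
        # Node is a leaf, or far enough away that its aggregate suffices.
        out += node.color * solid_angle
    else:
        for child in node.children:
            gather(child, point, out)

leaf = Node(np.array([1.0, 0.0, 0.0]), 0.1, np.array([0.8, 0.2, 0.2]))
bleed = np.zeros(3)
gather(leaf, np.array([0.0, 0.0, 0.0]), bleed)
print(bleed)
```
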
84

The Development and Validation of SINATRA: A Three-Dimensional Direct Simulation Monte Carlo (DSMC) Code Written in Object-Oriented C++ and Performed on Cartesian Grids

Galvez, David Matthew 01 August 2018 (has links) (PDF)
The field of Computational Fluid Dynamics (CFD) primarily involves the approximation of the Navier-Stokes equations. However, these equations are only valid when the flow is considered continuous, such that molecular interactions are abundant and predictable. The Knudsen number, $Kn$, defined as the ratio of the flow's mean free path, $\lambda$, to some characteristic length, $L$, quantifies the continuity of any flow; when this parameter is large enough, alternative methods must be employed to simulate gases. The Direct Simulation Monte Carlo (DSMC) method simulates rarefied gas flows by directly tracking the particles that compose the flow and using probabilistic methods to determine their collisions and properties. This thesis discusses the development of a new DSMC simulation code, named SINATRA, which was written in object-oriented C++ and validated on Cartesian grids. The code demonstrates the ability to perform standard simulation code tasks, including reading in a user-made input file, performing the specified simulation, and generating visualization files compatible with Tecplot 360™, a commercial post-processing software package. SINATRA strategically uses an octree data structure as a storage scheme for computational grid data and uses it as the backbone for particle interactions. The discussed validation cases include comparisons of initial particle properties to theoretical data, convergence studies for the sampling of macroscopic properties, and validation of transport properties through natural diffusion and Couette flow simulations. The results show successful implementation of simple DSMC procedures, and a path for future development of the code is thoroughly discussed.
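
For concreteness, a small sketch of the continuum criterion: computing $Kn = \lambda / L$ and classifying the flow regime. The regime boundaries used below are commonly cited rules of thumb, not values taken from the thesis.

```python
# Sketch: classify a flow regime from the Knudsen number Kn = lambda / L.
# The regime boundaries below are common rules of thumb, not thesis values.
def knudsen(mean_free_path: float, char_length: float) -> float:
    return mean_free_path / char_length

def regime(kn: float) -> str:
    if kn < 0.01:
        return "continuum (Navier-Stokes valid)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transitional (DSMC appropriate)"
    return "free molecular"

# Example: air at sea level (lambda ~ 68 nm) around a 1 m body vs. a 1 micron gap.
for L in (1.0, 1e-6):
    kn = knudsen(68e-9, L)
    print(f"L = {L:g} m -> Kn = {kn:.3g}: {regime(kn)}")
```
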
85

Machine Learning Approaches to Historic Music Restoration

Coleman, Quinn 01 March 2021 (has links) (PDF)
In 1889, a representative of Thomas Edison recorded Johannes Brahms playing a piano arrangement of his piece titled “Hungarian Dance No. 1”. This recording acts as a window into how musical masters played in the 19th century. Yet, due to years of damage to the original recording medium, a wax cylinder, it was unlistenable by the time it was digitized into WAV format. This thesis presents machine learning approaches to an audio restoration system for historic music, which aims to convert this poor-quality Brahms piano recording into a higher-quality one. Digital signal processing is paired with two machine learning approaches: non-negative matrix factorization and deep neural networks. Our results show the advantages and disadvantages of our approaches when we compare them to a benchmark restoration of the same recording made by the Center for Computer Research in Music and Acoustics at Stanford University. They also show how this system provides restoration potential for a wide range of historic music artifacts like this recording, with minimal overhead made possible by machine learning. Finally, we discuss possible future improvements to these approaches.
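
A hedged sketch of one of the two approaches — non-negative matrix factorization applied to a magnitude spectrogram — assuming scipy/scikit-learn, a synthetic stand-in signal, and an arbitrary component count; the thesis's tuned pipeline and data differ.

```python
# Sketch: NMF-based restoration via a low-rank spectrogram approximation.
# The stand-in signal, component count, and STFT settings are assumed values.
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

fs = 16000
t = np.arange(fs * 2) / fs
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(t.size)  # toy "recording"

# Magnitude spectrogram (NMF requires non-negative input); keep phase to resynthesize.
f, times, Z = stft(noisy, fs=fs, nperseg=1024)
mag, phase = np.abs(Z), np.angle(Z)

# Factor the spectrogram into a few spectral templates times their activations.
model = NMF(n_components=8, init="nndsvd", max_iter=400, random_state=0)
W = model.fit_transform(mag)          # (freq bins, components)
H = model.components_                 # (components, frames)
mag_denoised = W @ H                  # low-rank approximation suppresses broadband noise

_, restored = istft(mag_denoised * np.exp(1j * phase), fs=fs, nperseg=1024)
print(restored.shape)
```
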
86

DependencyVis: Helping Developers Visualize Software Dependency Information

Lui, Nathan 01 June 2021 (has links) (PDF)
The use of dependencies has been increasing in popularity over the past decade, especially as package managers such as JavaScript's npm have made installing packages a single command. However, while incidents such as the left-pad incident have increased awareness of how risky relying on these packages can be, there is still work to be done in getting developers to take the extra research step of determining whether a package is up to standards. Finding metrics for different packages and comparing them is a difficult and time-consuming task, especially since potential vulnerabilities are not the only metric to consider; how popular and how actively maintained a package is matters just as much. Therefore, we propose DependencyVis, a visualization tool specific to JavaScript projects and npm packages, which analyzes a project's dependencies and helps developers by looking up basic metrics that address a dependency's popularity, activeness, and vulnerabilities, such as the number of GitHub stars, forks, and issues, as well as security advisory information from npm audit. This thesis then proposes use cases for DependencyVis that help users compare dependencies by displaying them in a graph, with metrics represented by visual aspects such as node color and node size.
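
A minimal sketch of the kind of metric lookup described above, assuming the public GitHub REST API queried with Python's requests; the example repository and the unauthenticated (rate-limited) request are assumptions, and this is not DependencyVis's actual code.

```python
# Sketch: look up popularity/activeness metrics for a dependency's GitHub repo.
# Unauthenticated requests are rate-limited; the repo below is just an example.
import requests

def repo_metrics(owner: str, repo: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "last_push": data["pushed_at"],   # a rough "activeness" signal
    }

print(repo_metrics("expressjs", "express"))
```
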
87

Reducing Vale's Memory Management Overhead Through Static Analysis

Watkins, Theodore C 01 June 2021 (has links) (PDF)
Vale is a multi-purpose programming language that focuses on guaranteeing memory safety with minimal effect on performance. To accomplish this, Vale utilizes a memory management system called Hybrid Generational Memory (HGM). HGM uses generational references to track the state of objects in memory, and static analysis to reduce memory management overhead at runtime. This thesis describes the program that performs static analysis on Vale source code during compilation, and analyzes its effect on the performance of Vale programs.
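
To make generational references concrete, a small Python model of the runtime check; Vale implements this in its compiler and runtime, so the classes below are purely illustrative, and HGM's static analysis aims to prove many such checks redundant so they can be elided at compile time.

```python
# Sketch: a generational-reference check. Each heap slot carries a generation
# counter; a reference remembers the generation it was created under, and a
# dereference is valid only while the two still match. Purely illustrative.
class Slot:
    def __init__(self, value):
        self.generation = 0
        self.value = value

    def free(self):
        self.generation += 1  # invalidates every outstanding reference
        self.value = None

class GenRef:
    def __init__(self, slot: Slot):
        self.slot = slot
        self.generation = slot.generation  # remember the generation at creation

    def deref(self):
        if self.generation != self.slot.generation:
            raise RuntimeError("use after free detected by generation mismatch")
        return self.slot.value

slot = Slot("ship")
ref = GenRef(slot)
print(ref.deref())   # "ship"
slot.free()
ref.deref()          # raises: the generation no longer matches
```
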
88

Artist-Driven Fracturing of Polyhedral Surface Meshes

Casella, Tyler 01 December 2013 (has links) (PDF)
This paper presents a robust, artist-driven method for fracturing a polyhedral surface mesh via fracture maps. A fracture map is an undirected simple graph whose nodes represent positions in UV-space and whose edges represent fracture lines along the surface of a mesh. Fracture maps allow artists to concisely and rapidly define, edit, and apply fracture patterns onto the surface of their mesh. The method projects a fracture map onto a polyhedral surface and splits its triangles accordingly. The polyhedral mesh is then segmented along the fracture lines to produce a set of independent surfaces called fracture components, each containing the visible surface of one fractured mesh fragment. Subsequently, we utilize a Voronoi-based approximation of the input polyhedral mesh’s medial axis to derive a hidden surface for each fragment. The result is a new watertight polyhedral mesh representing the full fracture component. Results are acquired after a delay brief enough for interactive design. As the size of the input mesh increases, the computation time has been shown to grow linearly: a large mesh of 41,000 triangles requires approximately 3.4 seconds to perform a complete fracture with a complex pattern. For a wide variety of applications, the resulting fractures allow meshes to respond realistically to the application of external forces.
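
A small sketch of the Voronoi machinery the hidden-surface step leans on, assuming scipy and a toy 2D boundary: Voronoi vertices of points sampled on a shape's boundary approximate its medial axis. The circle sampling below stands in for sampling a real mesh surface.

```python
# Sketch: Voronoi-based medial axis approximation in 2D. Voronoi vertices of
# points sampled on a shape's boundary approximate its medial axis; a circle
# is used here as a toy stand-in for a mesh surface.
import numpy as np
from scipy.spatial import Voronoi

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
boundary = np.column_stack([np.cos(theta), np.sin(theta)])  # unit-circle "surface"
# Tiny jitter avoids a degenerate, exactly cocircular input.
boundary += 1e-6 * np.random.default_rng(0).standard_normal(boundary.shape)

vor = Voronoi(boundary)
inside = np.array([v for v in vor.vertices if np.hypot(v[0], v[1]) < 1.0])
print(len(inside), "interior Voronoi vertices approximate the medial axis")
print(inside.mean(axis=0))  # for a circle, they cluster at the center
```
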
89

PARIS: A PArallel RSA-Prime InSpection Tool

White, Joseph R. 01 June 2013 (has links) (PDF)
Modern-day computer security relies heavily on cryptography as a means to protect the data that we have become increasingly reliant on. As the Internet becomes more ubiquitous, methods of security must be better than ever. Validation tools can be leveraged to help increase our confidence in, and accountability for, the methods we employ to secure our systems. Security validation, however, can be difficult and time-consuming. As our computational ability increases, calculations that were once considered “hard” due to the length of computation can now be done in minutes. We are constantly increasing the size of our keys and attempting to make computations harder in order to protect our information. This increase in “cracking” difficulty often has the unfortunate side effect of making validation equally difficult. We can leverage massive parallelism and the computational power granted by today’s commodity hardware, such as GPUs, to make checks attainable that would otherwise be impossible to perform. Our work presents a practical tool for validating RSA keys against poorly chosen prime numbers: a fundamental problem that has led to significant security holes despite the RSA algorithm’s mathematical soundness. Our tool, PARIS, leverages NVIDIA’s CUDA framework to perform a complete set of greatest common divisor calculations between all keys in a provided set. Our implementation offers a 27.5 times speedup using a GTX 480 and a 33.9 times speedup using a Tesla K20Xm, both compared to a reference sequential implementation, for sets of fewer than 200,000 keys. This level of speedup brings such validation into the realm of practicality.
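
A sequential Python sketch of the underlying check: pairwise GCDs across RSA moduli expose any shared prime, immediately factoring both keys. PARIS performs this massively in parallel with CUDA; the loop below shows only the math, with tiny primes assumed for illustration.

```python
# Sketch: the pairwise-GCD check behind RSA key validation. If two moduli
# share a prime factor, gcd reveals it and both keys are broken. PARIS runs
# this massively in parallel on the GPU; this loop shows only the math.
from math import gcd
from itertools import combinations

# Tiny primes for illustration; real RSA primes are hundreds of digits long.
p, q1, q2 = 101, 103, 107
moduli = [p * q1, p * q2, 109 * 113]  # the first two wrongly share the prime p

for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
    g = gcd(n1, n2)
    if g > 1:
        print(f"keys {i} and {j} share prime {g}; "
              f"factors: ({g}, {n1 // g}) and ({g}, {n2 // g})")
```
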
90

Panodepth – Panoramic Monocular Depth Perception Model and Framework

Wong, Adley K 01 December 2022 (has links) (PDF)
Depth perception has become a heavily researched area as companies and researchers strive towards the development of self-driving cars. Self-driving cars rely on perceiving the surrounding area, which depends heavily on technology capable of providing the system with depth perception. In this paper, we explore developing a single-camera (monocular) depth prediction model trained on panoramic depth images. Our model makes novel use of transfer learning with efficient encoder models, pre-training on a larger dataset of flat depth images, and optimization for use on a Jetson Nano. Additionally, we present a training and optimization framework that makes developing and testing new monocular depth perception models easier and faster. While the model failed to achieve a high frame rate, the framework and models developed are a promising starting place for future work.
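
A hedged PyTorch sketch of transfer learning for monocular depth, assuming MobileNetV2 as a stand-in "efficient encoder" and a deliberately tiny decoder; Panodepth's actual architecture, training, and Jetson Nano optimization are not reproduced here.

```python
# Sketch: a monocular depth network built on a pretrained encoder via transfer
# learning. MobileNetV2 stands in for the thesis's "efficient encoder"; the
# decoder below is a deliberately small assumption, not Panodepth's design.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
        self.encoder = backbone.features          # reuse pretrained features
        self.decoder = nn.Sequential(             # upsample back to input size
            nn.Conv2d(1280, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # one depth channel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DepthNet().eval()
with torch.no_grad():
    depth = model(torch.randn(1, 3, 256, 512))  # panoramic-ish aspect ratio
print(depth.shape)  # torch.Size([1, 1, 256, 512])
```
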
