31

Modifying Instruction Sets In The Gem5 Simulator To Support Fault Tolerant Designs

Zhang, Chuan 23 November 2015 (has links)
Traditional fault-tolerant techniques such as hardware or time redundancy incur high overhead and are inefficient for checking arithmetic operations. Our objective is to study an alternative approach: adding new instructions that check arithmetic operations. These checking instructions either rely on error-detecting codes or calculate approximate results, and consequently consume much less execution time. To evaluate the effectiveness of this approach, we wish to modify several benchmarks to use checking instructions and run simulation experiments to measure their execution time and memory usage. However, the checking instructions are not part of the standard instruction set and, as a result, are not supported by current architecture simulators. Therefore, another objective of this thesis is to develop a method for inserting new instructions into the Gem5 simulator and cross compiler. The insertion process is integrated into a software tool called Gtool, which can add an error-checking capability to C programs using the new instructions.
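The abstract does not spell out the semantics of its checking instructions; as a hedged sketch of the error-detecting-code idea it mentions, the check below uses a mod-3 residue code: the residue of a correct product must equal the product of the operand residues (mod 3), so a cheap comparison can flag a faulty multiplier. The function name is illustrative, not the thesis's actual instruction.

```python
def residue_check(a: int, b: int, result: int, mod: int = 3) -> bool:
    """Return True if `result` is consistent with a * b under a residue code.

    The check costs two cheap modular multiplications instead of redoing
    the full multiplication. Note the usual coverage trade-off: an error
    that happens to be a multiple of `mod` escapes detection.
    """
    return result % mod == ((a % mod) * (b % mod)) % mod
```

For example, `residue_check(6, 7, 42)` passes, while a corrupted result such as `43` fails the check; a corruption of exactly `+3` would slip through, which is why larger moduli trade cost for coverage.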
32

Compacting Loads and Stores for Code Size Reduction

Asay, Isaac 01 March 2014 (has links)
It is important for compilers to generate executable code that is as small as possible, particularly when generating code for embedded systems. One method of reducing code size is to use instruction set architectures (ISAs) that support combining multiple operations into single operations. The ARM ISA allows for combining multiple memory operations to contiguous memory addresses into a single operation. The LLVM compiler contains a specific memory optimization to perform this combining of memory operations, called ARMLoadStoreOpt. This optimization, however, relies on another optimization (ARMPreAllocLoadStoreOpt) to move eligible memory operations into proximity in order to perform properly. This mover optimization occurs before register allocation, while ARMLoadStoreOpt occurs after register allocation. This thesis implements a similar mover optimization (called MagnetPass) after register allocation is performed, and compares this implementation with the existing optimization. While in most cases the two optimizations provide comparable results, our implementation in its current state requires some improvements before it will be a viable alternative to the existing optimization. Specifically, the algorithm will need to be modified to reduce computational complexity, and our implementation will need to take care not to interfere with other LLVM optimizations.
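As a conceptual sketch of the combining that ARMLoadStoreOpt performs (not the LLVM pass itself, which operates on machine instructions), runs of loads at contiguous word addresses can be merged into one multi-register operation such as ARM's LDM. The tuple representation below is an illustrative assumption.

```python
def merge_contiguous_loads(loads):
    """Group word loads that one LDM-style instruction could cover.

    loads: list of (register, address) pairs sorted by address,
    assuming a 4-byte word size. Each returned group is a run of
    loads at contiguous addresses, mergeable into a single operation.
    """
    groups = []
    for reg, addr in loads:
        if groups and addr == groups[-1][-1][1] + 4:
            groups[-1].append((reg, addr))  # extends the current run
        else:
            groups.append([(reg, addr)])    # gap: start a new run
    return groups
```

This also illustrates why a mover pass matters: loads must first be placed adjacently (and at contiguous addresses) before any such grouping can fire.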
33

Virtual Reality Engine Development

Varahamurthy, Varun 01 June 2014 (has links)
With the advent of modern graphics and computing hardware and cheaper sensor and display technologies, virtual reality is becoming increasingly popular in the fields of gaming, therapy, training, and visualization. Earlier attempts at popularizing VR technology were plagued by issues of cost, portability, and marketability to the general public. Modern screen technologies make it possible to produce cheap, light head-mounted displays (HMDs) like the Oculus Rift, and modern GPUs make it possible to create and deliver a seamless real-time 3D experience to the user. 3D sensing has found an application in virtual and augmented reality as well, allowing for a higher level of interaction between the real and the simulated. Still, issues persist. Many modern graphics/game engines do not provide developers with an intuitive or adaptable interface for incorporating these new technologies. Those that do tend to treat VR as a novelty afterthought, and even then only provide tailor-made extensions for specific hardware. The goal of this paper is to design and implement a functional, general-purpose VR engine using abstract interfaces for many of the hardware components involved, allowing easy extensibility for the developer.
34

A Data-Driven Approach to Cubesat Health Monitoring

Singh, Serbinder 01 June 2017 (has links)
Spacecraft health monitoring is essential to ensure that a spacecraft is operating properly and has no anomalies that could jeopardize its mission. Many current methods of monitoring system health become difficult to use as the complexity of spacecraft increases, and are in many cases impractical on CubeSat satellites, which have strict size and resource limitations. To overcome these problems, new data-driven techniques such as the Inductive Monitoring System (IMS) use data mining and machine learning on archived system telemetry to create models that characterize nominal system behavior. The models that IMS creates take the form of clusters that capture the relationship between a set of sensors in time series data. Each of these clusters defines a nominal operating state of the satellite and the range of sensor values that represents it. These characterizations can then be autonomously compared against real-time telemetry on board the spacecraft to determine whether the spacecraft is operating nominally. This thesis presents an adaptation of IMS to create a spacecraft health monitoring system for CubeSat missions developed by the PolySat lab. This system is integrated into PolySat's flight software and provides real-time health monitoring of the spacecraft during its mission. Any anomalies detected are reported, and further analysis can be done to determine the cause. The system can also be used for the analysis of archived events. The IMS algorithms used by the system were validated, and ground testing was done to determine the performance, reliability, and accuracy of the system. The system was successful in detecting and identifying known anomalies in archived flight telemetry from the IPEX mission. In addition, real-time monitoring performed on the satellite yielded strong results that give us confidence in the use of this system in all future missions.
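A minimal sketch of the IMS idea described above, under simplifying assumptions (it is not PolySat's flight code): nominal telemetry vectors are greedily grouped into clusters, each stored as a per-sensor [min, max] range, and a live vector is nominal if it falls within some cluster's ranges plus a tolerance.

```python
def build_clusters(training, tolerance=0.0):
    """Greedily grow per-sensor [min, max] boxes from nominal telemetry.

    training: list of sensor-value vectors (all the same length).
    Each cluster is a list of (lo, hi) ranges, one per sensor.
    """
    clusters = []
    for vec in training:
        for box in clusters:
            if all(lo - tolerance <= v <= hi + tolerance
                   for v, (lo, hi) in zip(vec, box)):
                # Vector fits this cluster: expand its ranges to include it.
                for i, v in enumerate(vec):
                    lo, hi = box[i]
                    box[i] = (min(lo, v), max(hi, v))
                break
        else:
            clusters.append([(v, v) for v in vec])  # start a new cluster
    return clusters

def is_nominal(vec, clusters, tolerance=0.0):
    """True if `vec` falls inside (or near) any learned cluster."""
    return any(all(lo - tolerance <= v <= hi + tolerance
                   for v, (lo, hi) in zip(vec, box))
               for box in clusters)
```

A vector that matches no cluster would be flagged as a potential anomaly for further analysis, mirroring the on-board monitoring loop the abstract describes.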
35

Travel time prediction using machine learning

Nampalli, Vignaan Vardhan 08 August 2023 (has links) (PDF)
With the rapid growth of urban populations and increasing vehicular traffic, congestion has become a major challenge for transportation systems worldwide. Accurate estimation of travel time plays a crucial role in mitigating congestion and enhancing traffic management. This research develops a novel methodology that uses machine learning models to estimate travel time from real-time traffic data collected through Bluetooth sensors deployed at traffic intersections. The research compares five different systems for travel time prediction, evaluating their performance and accuracy. The results highlight the effectiveness of the machine learning models in accurately predicting travel time. Lastly, the research explores the creation of a model specifically designed to predict travel time during peak hours, considering the impact of traffic lights on travel time between intersections. The findings of this study contribute to the development of efficient and reliable travel time prediction systems, enabling commuters to make informed decisions and improving traffic management strategies.
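The abstract does not name its five models; as an illustrative stand-in for the simplest kind of learned predictor, the sketch below averages the k nearest historical Bluetooth-derived travel times by time-of-day and day-of-week. The feature choice is an assumption, not the thesis's method.

```python
def knn_travel_time(history, hour, weekday, k=3):
    """Predict a segment's travel time by k-nearest-neighbor averaging.

    history: list of ((hour, weekday), travel_time_seconds) observations,
    e.g. derived from Bluetooth re-identification between intersections.
    Distance is a simple L1 metric over the (hour, weekday) features.
    """
    by_distance = sorted(
        history,
        key=lambda rec: abs(rec[0][0] - hour) + abs(rec[0][1] - weekday))
    nearest = by_distance[:k]
    return sum(t for _, t in nearest) / len(nearest)
```

A peak-hour model like the one the abstract mentions would, in this framing, simply be trained on (or weighted toward) observations whose hour falls in the peak window.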
36

Specialized Named Entity Recognition for Breast Cancer Subtyping

Hawblitzel, Griffith Scheyer 01 June 2022 (has links) (PDF)
The amount of data and analysis being published and archived in the biomedical research community is more than can feasibly be sifted through manually, which limits the information an individual or small group can synthesize and integrate into their own research. This presents an opportunity for automated methods, including Natural Language Processing (NLP), to extract important information from text on various topics. Named Entity Recognition (NER) is one way to automate knowledge extraction from raw text. NER is defined as the task of identifying named entities in text using labels such as people, dates, locations, diseases, and proteins. Several NLP tools are designed for entity recognition, but they rely on large established corpora for training data. Biomedical research has the potential to guide diagnostic and therapeutic decisions, yet the overwhelming density of publications acts as a barrier to getting these results into a clinical setting. An exceptional example of this is the field of breast cancer biology, where over 2 million people are diagnosed worldwide every year and billions of dollars are spent on research. Breast cancer biology literature and research rely on a highly specific domain with unique language and vocabulary, and therefore require specialized NLP tools that can generate biologically meaningful results. This thesis presents a novel annotation tool that is optimized for quickly creating training data for spaCy pipelines, and explores the viability of that data for analyzing papers with automated processing. Custom pipelines trained on these annotations are shown to recognize custom entities at levels comparable to recognition based on large corpora.
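To make the training data concrete, here is the (text, annotations) shape with character-offset entity spans that spaCy NER training examples conventionally use, plus a small bounds check. The sentence and the labels (GENE, SUBTYPE) are hypothetical stand-ins for the thesis's custom entities, not its actual annotation set.

```python
# One illustrative training example: raw text paired with entity spans,
# each span given as (start_char, end_char, label) with end exclusive.
TRAIN_DATA = [
    ("HER2 overexpression defines the HER2-enriched subtype.",
     {"entities": [(0, 4, "GENE"), (32, 45, "SUBTYPE")]}),
]

def spans_valid(text, annotations):
    """Check every (start, end, label) span is non-empty and in bounds."""
    return all(0 <= start < end <= len(text)
               for start, end, _ in annotations["entities"])
```

An annotation tool's core job, in this framing, is producing these offsets quickly and consistently; garbled offsets (e.g. spans that split a token) are a common source of silently bad NER training data.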
37

A Design of a Digital Lockout Tagout System with Machine Learning

Chen, Brandon H 01 December 2022 (has links) (PDF)
Lockout Tagout (LOTO) is a safety procedure instituted by the Occupational Safety and Health Administration (OSHA) for maintenance on dangerous machinery and hazardous power sources. In this procedure, authorized workers shut off the machinery and use physical locks and tags to prevent operation during maintenance. LOTO has been the industry standard for the 32 years since it was instituted, and is used in many different industries such as industrial work, mining, and agriculture. However, LOTO is not without its issues. The procedure requires employees to be trained and is prone to human error. There is also a clash between the technological advancement of machinery and LOTO's requirement of physical locks and tags. In this thesis, we propose a digital LOTO system to help streamline the LOTO procedure and increase worker safety with machine learning. We first discuss what LOTO is, along with its current requirements, limitations, and issues. We then examine current IoT locks and digital LOTO solutions and compare them to the requirements of traditional LOTO. We then present our proposed digital LOTO system, which uses a rule-based component to enforce and streamline the LOTO procedure and machine learning to detect potential violations of LOTO standards. We also validate that our system fulfills the requirements of LOTO and that the combination of machine learning and rule-based checks ensures worker safety by detecting violations with high accuracy. Finally, we discuss potential future work and improvements to this system, as this thesis is part of a larger collaboration with Chevron, which plans to implement a digital LOTO system in its oil fields.
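A minimal sketch of what the rule-based layer of such a system might look like, under stated assumptions: each rule checks one LOTO requirement against a machine-state record. The field names and the rule set are illustrative, not the system built with Chevron.

```python
# Hypothetical LOTO rules: (rule name, predicate over a machine-state dict).
RULES = [
    ("energy isolated",   lambda m: m["energy_source"] == "off"),
    ("lock applied",      lambda m: m["locked"]),
    ("tag attached",      lambda m: m["tagged"]),
    ("worker authorized", lambda m: m["worker"] in m["authorized_workers"]),
]

def violations(machine):
    """Return the names of every LOTO rule this machine state violates."""
    return [name for name, check in RULES if not check(machine)]
```

In the architecture the abstract describes, a machine-learning component would sit alongside such hard rules, flagging likely violations (e.g. from sensor patterns) that no explicit rule encodes.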
38

Optimizing Lempel-Ziv Factorization for the GPU Architecture

Ching, Bryan 01 June 2014 (has links) (PDF)
Lossless data compression is used to reduce storage requirements, relieving I/O channels and making better use of bandwidth. The Lempel-Ziv lossless compression algorithms form the basis for many of the most commonly used compression schemes. General-purpose computing on graphics processing units (GPGPU) allows us to take advantage of the massively parallel nature of GPUs for computations other than their original purpose of rendering graphics. Our work targets the use of GPUs for general lossless data compression. Specifically, we developed and ported an algorithm that constructs the Lempel-Ziv factorization directly on the GPU. Our implementation bypasses the sequential nature of the LZ factorization and attempts to compute the factorization in parallel. By breaking the LZ factorization down into what we call the PLZ, we are able to outperform the fastest serial CPU implementations by up to 24x and perform comparably to a parallel multicore CPU implementation. To achieve these speeds, our implementation output LZ factorizations that were, on average, only 0.01 percent larger than the optimal solution that could be computed sequentially. We also reevaluate the fastest GPU suffix array construction algorithm, which is needed to compute the LZ factorization, and find speedups of up to 5x over the fastest CPU implementations.
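To make the object being parallelized concrete, here is a naive sequential LZ factorization (quadratic, for exposition only): each factor is either a fresh literal or the longest substring starting at the current position that also occurs earlier in the string. The thesis's GPU variant produces the same kind of factorization; this sketch is a reference baseline, not its algorithm.

```python
def lz_factorize(s):
    """Greedy Lempel-Ziv factorization of string s.

    Returns a list of factors: a single new character, or the longest
    prefix of the remaining text that also starts somewhere earlier
    (matches may overlap the current position, as in LZ77).
    """
    factors, i = [], 0
    while i < len(s):
        best = 0
        for j in range(i):                      # try every earlier start
            k = 0
            while i + k < len(s) and s[j + k] == s[i + k]:
                k += 1
            best = max(best, k)
        if best == 0:
            factors.append(s[i])                # literal: first occurrence
            i += 1
        else:
            factors.append(s[i:i + best])
            i += best
    return factors
```

The dependency that makes this hard to parallelize is visible here: each factor's start position depends on where the previous factor ended, which is exactly what the PLZ decomposition works around.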
39

Astro – A Low-Cost, Low-Power Cluster for CPU-GPU Hybrid Computing Using the Jetson TK1

Sheen, Sean Kai 01 June 2016 (has links) (PDF)
With the rising costs of large-scale distributed systems, many researchers have begun looking at low-power architectures for clusters. In this paper, we describe our Astro cluster, which consists of 46 NVIDIA Jetson TK1 nodes, each equipped with an ARM Cortex-A15 CPU, a 192-core Kepler GPU, 2 GB of RAM, and 16 GB of flash storage. The cluster has a number of advantages compared to conventional clusters, including lower power usage, ambient cooling, shared memory between the CPU and GPU, and affordability. The cluster is built from commodity hardware and can be set up at relatively low cost while providing up to 190 single-precision GFLOPS of computing power per node due to its combined GPU/CPU architecture. The cluster currently uses one 48-port Gigabit Ethernet switch and runs Linux for Tegra, a modified version of Ubuntu provided by NVIDIA, as its operating system. Common file systems such as PVFS, Ceph, and NFS are supported by the cluster, and benchmarks such as HPL, LAPACK, and LAMMPS are used to evaluate the system. At peak performance, the cluster produces 328 GFLOPS of double-precision computation while drawing a peak of 810 W on the LINPACK benchmark, placing the cluster at 324th place on the Green500. Single-precision benchmarks reach a peak performance of 6800 GFLOPS. The Astro cluster aims to be a proof of concept for future low-power clusters that utilize a similar architecture. The cluster is installed with many of the same applications used by top supercomputers and is validated using several standard supercomputing benchmarks. We show that, with the rise of low-power CPUs and GPUs and the need for lower server costs, this cluster provides insight into how ARM and CPU-GPU hybrid chips will perform in high-performance computing.
40

REST API to Access and Manage Geospatial Pipeline Integrity Data

Francis, Alexandra Michelle 01 June 2015 (has links) (PDF)
Today’s economy and infrastructure depend on raw natural resources, like crude oil and natural gas, that are optimally transported through a network of hundreds of thousands of miles of pipelines throughout America[28]. A damaged pipe can negatively affect thousands of homes and businesses, so it is vital that pipelines are monitored and quickly repaired[1]. Ideally, pipeline operators are able to detect damage before it occurs, but ensuring the integrity of the vast number of pipes is unrealistic and would take an impractical amount of time and manpower[1]. Natural disasters, like earthquakes, as well as construction are just two of the events that could potentially threaten the integrity of pipelines. Due to the diverse collection of data sources, the necessary geospatial data is scattered across different physical locations, stored in different formats, and owned by different organizations. Pipeline companies do not have the resources to manually gather all input factors to make a meaningful analysis of the land surrounding a pipe. Our solution to this problem is a single, centralized system that can be queried to get all necessary geospatial data and related information in a standardized and desirable format. The service reduces client-side computation time by allowing our system to find, ingest, parse, and store the data from potentially hundreds of repositories in varying formats. An online web service fulfills all of the requirements and allows easy remote access for critical analysis of the data through computer-based decision support systems (DSS). Our system, REST API for Pipeline Integrity Data (RAPID), is a multi-tenant REST API that uses the HTTP protocol to provide an online and intuitive set of functions for DSS. RAPID's API allows DSS to access and manage data stored in a geospatial database and is built on the Django web framework.
Full documentation of the design and implementation of RAPID's API is provided in this thesis, supplemented with background and validation of the completed system.
