61

Specialized Named Entity Recognition for Breast Cancer Subtyping

Hawblitzel, Griffith Scheyer 01 June 2022 (has links) (PDF)
The amount of data and analysis being published and archived in the biomedical research community is more than can feasibly be sifted through manually, which limits the information an individual or small group can synthesize and integrate into their own research. This presents an opportunity for automated methods, including Natural Language Processing (NLP), to extract important information from text on various topics. Named Entity Recognition (NER) is one way to automate knowledge extraction from raw text. NER is defined as the task of identifying named entities in text using labels such as people, dates, locations, diseases, and proteins. Several NLP tools are designed for entity recognition, but they rely on large, established corpora for training data. Biomedical research has the potential to guide diagnostic and therapeutic decisions, yet the overwhelming density of publications acts as a barrier to getting these results into a clinical setting. An exceptional example is the field of breast cancer biology, where over 2 million people are diagnosed worldwide every year and billions of dollars are spent on research. Breast cancer biology literature and research rely on a highly specific domain with unique language and vocabulary, and therefore require specialized NLP tools that can generate biologically meaningful results. This thesis presents a novel annotation tool optimized for quickly creating training data for spaCy pipelines, and explores the viability of that data for analyzing papers with automated processing. Custom pipelines trained on these annotations are shown to recognize custom entities at levels comparable to recognition based on large corpora.
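
As a rough illustration of the kind of training data and custom pipeline the thesis describes, the following is a minimal spaCy 3.x sketch. The entity labels, example sentences, and character offsets are invented for illustration and are not taken from the thesis or its annotation tool.

```python
import spacy
from spacy.training import Example

# Hypothetical annotations in spaCy's (start, end, label) offset format;
# the labels and sentences are illustrative, not from the thesis.
TRAIN_DATA = [
    ("HER2 amplification defines an aggressive breast cancer subtype.",
     {"entities": [(0, 4, "GENE"), (41, 54, "DISEASE")]}),
    ("Tamoxifen is commonly prescribed for ER-positive tumors.",
     {"entities": [(0, 9, "DRUG")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for _, _, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for _ in range(20):  # a few passes over the toy corpus
    for text, ann in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer)

doc = nlp("HER2 status guides treatment of breast cancer.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

A real pipeline would be trained on many annotated abstracts rather than two toy sentences; the point here is only the shape of the annotations and the training loop.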
62

A Design of a Digital Lockout Tagout System with Machine Learning

Chen, Brandon H 01 December 2022 (has links) (PDF)
Lockout Tagout (LOTO) is a safety procedure mandated by the Occupational Safety and Health Administration (OSHA) for maintenance on dangerous machinery and hazardous power sources. In this procedure, authorized workers shut off the machinery and use physical locks and tags to prevent operation during maintenance. LOTO has been the industry standard for the 32 years since it was instituted and is used in many different industries such as industrial work, mining, and agriculture. However, LOTO is not without its issues. The procedure requires employees to be trained and is prone to human error, and there is a clash between the technological advancement of machinery and the physical locks and tags that LOTO requires. In this thesis, we propose a digital LOTO system to help streamline the LOTO procedure and increase worker safety with machine learning. We first discuss what LOTO is, along with its current requirements, limitations, and issues. We then examine current IoT locks and digital LOTO solutions and compare them to the requirements of traditional LOTO. We then present our proposed digital LOTO system, which uses a rule-based system to enforce and streamline the LOTO procedure and machine learning to detect potential violations of LOTO standards. We also validate that our system fulfills the requirements of LOTO and that the combination of machine learning and rule-based systems ensures worker safety by detecting violations with high accuracy. Finally, we discuss potential future work and improvements, as this thesis is part of a larger collaboration with Chevron, which plans to implement a digital LOTO system in its oil fields.
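
As a purely hypothetical sketch of how a rule-based check might be paired with a learned violation detector in a digital LOTO system, consider the following Python fragment. The class names, fields, and sensor features are all invented for illustration; the thesis's actual system is not reproduced here.

```python
from dataclasses import dataclass, field
import numpy as np
from sklearn.ensemble import RandomForestClassifier

@dataclass
class LockoutPoint:
    """Hypothetical digital record of one energy-isolation point."""
    point_id: str
    energy_isolated: bool = False
    lock_applied_by: set = field(default_factory=set)

def lockout_rules_ok(point: LockoutPoint, authorized_workers: set) -> bool:
    """Rule-based check: energy must be isolated and every authorized worker
    must have applied their own lock before maintenance may begin."""
    return point.energy_isolated and authorized_workers <= point.lock_applied_by

# Hypothetical sensor features (valve position, current draw, badge scans, ...)
# and labels (1 = violation) used to train a simple violation classifier.
X = np.random.rand(200, 4)
y = (X[:, 0] > 0.8).astype(int)          # toy labeling rule standing in for real data
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

point = LockoutPoint("pump-7", energy_isolated=True, lock_applied_by={"w1", "w2"})
latest_reading = np.random.rand(1, 4)
if lockout_rules_ok(point, {"w1", "w2"}) and clf.predict(latest_reading)[0] == 0:
    print("Maintenance may proceed on", point.point_id)
else:
    print("Blocked: rule check or model flagged a potential violation")
```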
63

Analysis of System Reliability as a Capital Investment

Williams, Albert J. 01 January 1978 (has links) (PDF)
This report, "Analysis of System Reliability as a Capital Investment", analyzes the reliability of two similar tracking radar systems as a capital investment. It describes the two tracking radar systems and calculates their mission failure rates from field failure data. Additionally, it analyzes a simulation program written in FORTRAN that treats system reliability as a capital investment, based on 335 electronic systems fabricated with a reliability program versus 564 electronic systems fabricated without one. The data from the two tracking radar systems, one with a reliability program and the other without, is incorporated into the computer program to verify the conclusions of the author of the simulation program.
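
The report's FORTRAN program is not reproduced in the abstract; as a rough Python sketch of the underlying reliability arithmetic, the following computes a constant failure rate from field data and the resulting mission reliability under the standard exponential model. All numbers are made up and are not the report's figures.

```python
import math

def failure_rate(total_failures: int, total_operating_hours: float) -> float:
    """Point estimate of the constant failure rate lambda (failures per hour)."""
    return total_failures / total_operating_hours

def mission_reliability(lmbda: float, mission_hours: float) -> float:
    """Exponential reliability model: R(t) = exp(-lambda * t)."""
    return math.exp(-lmbda * mission_hours)

# Made-up field data for two radar systems, one built under a reliability
# program and one without.
lam_with = failure_rate(12, 50_000.0)
lam_without = failure_rate(40, 50_000.0)

for name, lam in [("with reliability program", lam_with),
                  ("without reliability program", lam_without)]:
    r = mission_reliability(lam, 100.0)
    print(f"{name}: lambda = {lam:.2e}/hr, R(100 hr mission) = {r:.4f}")
```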
64

Addressing Challenges in Utilizing GPUs for Accelerating Privacy-Preserving Computation

Yudha, Ardhi Wiratama Baskara 01 January 2024 (has links) (PDF)
Cloud computing increasingly handles confidential data, as in private inference and private database queries. Two strategies are used for secure computation: (1) employing CPU Trusted Execution Environments (TEEs) such as AMD SEV, Intel SGX, or ARM TrustZone, and (2) using emerging cryptographic methods such as Fully Homomorphic Encryption (FHE) with libraries such as HElib, Microsoft SEAL, and PALISADE. GPUs are often employed to accelerate computation, but using GPUs to accelerate secure computation introduces challenges, which we address in three works. In the first work, we tackle GPU acceleration for secure computation with CPU TEEs. While TEEs perform computations on confidential data, extending their capabilities to GPUs is essential for leveraging GPU power. Existing approaches assume co-designed CPU-GPU setups, but we contend that co-designing the CPU and GPU is difficult to achieve and requires early coordination between CPU and GPU manufacturers. To address this, we propose achieving CPU-GPU TEE co-design via the software layer, using software-based memory encryption. This introduces issues due to AES's 128-bit granularity, and we present optimizations that mitigate them, resulting in execution time overheads of 1.1% for regular applications and 56% for irregular applications. In the second work, we focus on GPU acceleration for the CPU FHE library HElib, particularly for comparison operations on encrypted data. These operations are vital in machine learning, image processing, and private database queries, yet their acceleration is often overlooked. We extend HElib to harness GPU acceleration for its resource-intensive components such as BluesteinNTT, BluesteinFFT, and element-wise operations, and we employ several optimizations to address memory separation, dynamic allocation, and parallelization challenges. With all optimizations and hybrid CPU-GPU parallelism, we achieve an 11.1× average speedup over the state-of-the-art CPU FHE library. In the third work, we concentrate on minimizing ciphertext size by leveraging insights from algorithms, data access patterns, and application requirements to reduce the operational footprint of an FHE application, particularly targeting neural network inference tasks. By implementing all three levels of ciphertext compression (precision reduction in comparisons, optimization of access patterns, and adjustments in data layout), we achieve a 5.6× speedup over 100x, the state-of-the-art GPU implementation. Overcoming these challenges is crucial for achieving significant GPU-driven performance improvements. This dissertation provides solutions to these hurdles, aiming to facilitate GPU-based acceleration of confidential data computation.
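
As an illustration of the general idea behind software-based memory encryption for data leaving a CPU TEE for the GPU, the sketch below encrypts a host buffer with AES-CTR (via the `cryptography` package) before it would be copied over PCIe and decrypts it on return. This is a conceptual sketch only, not the dissertation's implementation: it ignores how the session key is provisioned to the GPU, the 128-bit-granularity issues discussed in the first work, and the GPU-side decryption kernel.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # session key; a real design must share this securely with the GPU
nonce = os.urandom(16)  # initial counter block for CTR mode

def encrypt_buffer(buf: bytes) -> bytes:
    """Encrypt a host buffer before it leaves the CPU TEE."""
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(buf) + enc.finalize()

def decrypt_buffer(buf: bytes) -> bytes:
    """Decrypt a buffer copied back from the GPU."""
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(buf) + dec.finalize()

data = np.arange(1024, dtype=np.float32)      # plaintext tensor inside the TEE
ciphertext = encrypt_buffer(data.tobytes())   # what would actually travel to the GPU
restored = np.frombuffer(decrypt_buffer(ciphertext), dtype=np.float32)
assert np.array_equal(data, restored)
```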
65

CUDA Web API: Remote Execution of CUDA Kernels Using Web Services

Becker, Massimo J 01 June 2012 (has links) (PDF)
Massively parallel programming is a rapidly growing field, spurred by the recent introduction of general-purpose GPU computing. Modern graphics processors from NVIDIA and AMD have massively parallel architectures that can be used for applications such as 3D rendering, financial analysis, physics simulations, and biomedical analysis. These massively parallel systems are exposed to programmers through interfaces such as NVIDIA's CUDA, OpenCL, and Microsoft's C++ AMP. These frameworks expose functionality primarily through C or C++. To use these massively parallel frameworks, programs must be run on machines equipped with massively parallel hardware. These requirements limit the flexibility of new massively parallel systems. This paper explores the possibility that massively parallel systems can be exposed through web services in order to facilitate their use from remote systems written in other languages. To explore this possibility, an architecture is put forth, with requirements and a high-level design, for building a web service that can overcome limitations of existing tools and frameworks. The CUDA Web API is built using Python, PyCUDA, NumPy, JSON, and Django to meet the requirements set forth. Additionally, a client application, CUDA Cloud, is built and serves as an example web service client. The CUDA Web API's performance and functionality are validated using a common matrix multiplication algorithm implemented using different languages and tools. Performance tests show runtime improvements for larger datasets when using the CUDA Web API for remote CUDA kernel execution over serial implementations. This paper concludes that existing limitations associated with GPGPU usage can be overcome with the specified architecture.
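
The abstract names the building blocks (Python, PyCUDA, NumPy, JSON, Django) but not the request format, so the client sketch below is hypothetical: it shows the general shape of submitting a CUDA kernel and its inputs to a remote web service as JSON. The endpoint URL and all field names are invented and may differ from the actual CUDA Web API.

```python
import json
import numpy as np
import requests  # third-party HTTP client

KERNEL = r"""
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}
"""

a = np.arange(16, dtype=np.float32)
b = np.ones(16, dtype=np.float32)

# Hypothetical request body; the real service's schema may differ.
request_body = {
    "kernel_source": KERNEL,
    "kernel_name": "vec_add",
    "grid": [1, 1],
    "block": [16, 1, 1],
    "params": [a.tolist(), b.tolist(), None, 16],  # None marks an output buffer
}
resp = requests.post("http://localhost:8000/cuda/execute",
                     data=json.dumps(request_body),
                     headers={"Content-Type": "application/json"})
result = np.array(resp.json()["result"], dtype=np.float32)
print(result)
```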
66

Rethinking the I/O Stack for Persistent Memory

Chowdhury, Mohammad Ataur Rahman 28 March 2018 (has links)
Modern operating systems have been designed around the hypotheses that (a) memory is both byte-addressable and volatile and (b) storage is block-addressable and persistent. The arrival of new Persistent Memory (PM) technologies has made these assumptions obsolete. Despite much recent work in this space, the need for consistently sharing PM data across multiple applications remains an urgent, unsolved problem, and simple yet powerful operating system support remains elusive. In this dissertation, we propose and build the Region System, a high-performance operating system stack for PM that implements usable consistency and persistence for application data. The region system provides support for consistently mapping and sharing data resident in PM across user application address spaces. It introduces a novel IPI-based PMSYNC operation, which ensures atomic persistence of mapped pages across multiple address spaces. This allows applications to consume PM using the well-understood and much-desired memory-like model with an easy-to-use interface. Next, we propose a metadata structure without any redundant metadata to reduce CPU cache flushes. The high-performance design minimizes expensive PM ordering and durability operations by embracing a minimalistic approach to metadata construction and management. To strengthen the case for the region system, we analyze different types of applications to identify their dependence on memory-mapped data usage, and we propose user-level libraries, LIBPM-R and LIBPMEMOBJ-R, to support shared persistent containers. The user-level libraries, along with the region system, demonstrate a comprehensive end-to-end software stack for consuming PM devices.
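
The region system itself is kernel-level work, but the memory-like usage model it advocates can be roughly illustrated in Python with a memory-mapped file and an explicit flush. The sketch below is only an analogue: the file path is arbitrary, and `flush()` stands in loosely for the atomic, cross-address-space PMSYNC operation the dissertation proposes.

```python
import mmap
import os
import struct

PATH = "/tmp/region.pm"   # on real hardware this would live on a DAX-mounted PM device
SIZE = 4096

# Create and size the backing region once.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, SIZE)           # map the region into the address space

# Update application data in place through the mapping...
struct.pack_into("<Q", region, 0, 42)  # e.g., a persistent counter at offset 0

# ...then make the update durable; a real PM stack would use PMSYNC-style
# primitives with atomicity guarantees across address spaces.
region.flush()

print(struct.unpack_from("<Q", region, 0)[0])
region.close()
os.close(fd)
```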
67

Sustainable Resource Management for Cloud Data Centers

Mahmud, A. S. M. Hasan 15 June 2016 (has links)
In recent years, demand for data center computing has increased significantly due to the growing popularity of cloud applications and Internet-based services. Today's large data centers host hundreds of thousands of servers, and the peak power rating of a single data center may exceed 100 MW. The combined electricity consumption of global data centers accounts for about 3% of worldwide electricity production, raising serious concerns about their carbon footprint. Utility providers and governments are consistently pressuring data center operators to reduce their carbon footprint and energy consumption. While these operators (e.g., Apple, Facebook, and Google) have taken steps to reduce their carbon footprints (e.g., by installing on-site or off-site renewable energy facilities), they are aggressively looking for new approaches that do not require expensive hardware installation or modification. This dissertation focuses on developing algorithms and systems to improve the sustainability of data centers without incurring significant additional operational or setup costs. In the first part, we propose a provably efficient resource management solution for a self-managed data center that caps and reduces carbon emissions while maintaining satisfactory service performance; our solution reduces the data center's carbon emissions to a net-zero level and achieves carbon neutrality. In the second part, we consider minimizing carbon emissions in a hybrid data center infrastructure that includes geographically distributed self-managed and colocation data centers. This part identifies and addresses the challenges of resource management in a hybrid data center infrastructure and proposes an efficient distributed solution that jointly optimizes workload and resource allocation in both self-managed and colocation data centers. In the final part, we explore sustainable resource management from the cloud service user's point of view. A cloud service user purchases computing resources (e.g., virtual machines) from the service provider and does not have direct control over the carbon emissions of the provider's data center. Our proposed solution encourages a user to take part in sustainable (both economical and environmental) computing by limiting its spending on cloud resource purchases while satisfying its application performance requirements.
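
The flavor of optimization involved in carbon-aware resource management can be sketched as a toy linear program: split an hour's workload across data centers with different grid carbon intensities, subject to capacity limits. The numbers below are invented, and the dissertation's actual formulations (carbon capping, colocation coordination, performance constraints) are considerably richer.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data centers: grid carbon intensity (kg CO2 per unit of work)
# and capacity (units of work each can absorb this hour).
carbon = np.array([0.9, 0.4, 0.6])
capacity = np.array([50.0, 30.0, 40.0])
demand = 90.0

# minimize carbon . x   subject to   sum(x) = demand,   0 <= x_i <= capacity_i
res = linprog(c=carbon,
              A_eq=np.ones((1, 3)), b_eq=[demand],
              bounds=list(zip(np.zeros(3), capacity)),
              method="highs")

print("workload split:", np.round(res.x, 2))
print("total emissions:", round(float(carbon @ res.x), 2), "kg CO2")
```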
68

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014 (has links)
Scientific exploration demands heavy use of computational resources for large-scale and deep analysis in many different fields. The complexity or sheer scale of a computational study can often be encapsulated in the form of a workflow made up of numerous dependent components. Due to its decomposable and parallelizable nature, the components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools help manage the various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e., workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each. Our first contribution increases scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains; we devise and implement a generic decentralization framework for orchestrating workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments by providing generic adaptation mechanisms that are highly transparent and substantially less intrusive with respect to the rest of the executing workflow. Our third contribution improves the efficiency of orchestrating large-scale parameter-sweep workflows; by exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows. We also discuss the implementation issues and details that arise as we provide our contributions in each situation.
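
A toy sketch of the core orchestration step, shown below, runs a workflow's dependent components in topological waves with a thread pool. The task names are invented, and this illustrates only the general idea of run-time coordination, not the decentralization, adaptation, or parameter-sweep optimizations contributed by the dissertation.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical workflow: each task maps to the set of tasks it depends on.
workflow = {
    "align":  set(),
    "filter": {"align"},
    "stats":  {"filter"},
    "plot":   {"filter"},
    "report": {"stats", "plot"},
}

def run(task: str) -> None:
    print(f"running {task}")  # placeholder for dispatching work to a remote resource

ts = TopologicalSorter(workflow)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())   # tasks whose dependencies have all completed
        list(pool.map(run, ready))     # execute the current wave in parallel
        for task in ready:
            ts.done(task)
```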
69

Real-Time Scheduling of Embedded Applications on Multi-Core Platforms

Fan, Ming 21 March 2014 (has links)
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is a thing of the past, and researchers in both industry and academia are turning their focus to multi-core architectures for continued improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints, such as power/energy consumption, peak temperature, and reliability, are imposed on these systems. Real-time scheduling therefore plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms and developed both partitioned and semi-partitioned scheduling algorithms for fixed-priority, periodic, hard real-time tasks. We then extended our research to temperature constraints: we developed a closed-form solution that captures temperature dynamics for a given periodic voltage schedule on multi-core platforms, along with three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness: we investigated the energy estimation problem on multi-core platforms and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
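
Two standard building blocks in this area, response-time analysis for fixed-priority periodic tasks and first-fit partitioning onto cores, are sketched below. These are textbook techniques shown for orientation, not the dissertation's specific partitioned/semi-partitioned algorithms or its thermal and energy analyses; the task set is made up.

```python
import math

def response_time(task, higher_priority):
    """Classic fixed-priority response-time analysis:
    R = C + sum_j ceil(R / T_j) * C_j over higher-priority tasks, iterated to a fixed point."""
    C, T = task
    R = C
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in higher_priority)
        new_R = C + interference
        if new_R > T:
            return None          # deadline (assumed equal to the period) would be missed
        if new_R == R:
            return R
        R = new_R

def first_fit_partition(tasks, num_cores):
    """Assign rate-monotonic-ordered tasks to the first core that stays schedulable."""
    cores = [[] for _ in range(num_cores)]
    for task in sorted(tasks, key=lambda t: t[1]):   # shorter period = higher priority
        for core in cores:
            candidate = core + [task]
            if all(response_time(t, candidate[:i]) is not None
                   for i, t in enumerate(candidate)):
                core.append(task)
                break
        else:
            return None                              # unschedulable with this heuristic
    return cores

# Tasks as (C, T) = (worst-case execution time, period) pairs, made up for illustration.
tasks = [(1, 4), (2, 5), (2, 8), (3, 10), (1, 20)]
print(first_fit_partition(tasks, num_cores=2))
```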
70

New Method for Robotic Systems Architecture Analysis, Modeling, and Design

Li, Lu 28 August 2019 (has links)
No description available.
