61

Specialized Named Entity Recognition for Breast Cancer Subtyping

Hawblitzel, Griffith Scheyer 01 June 2022 (has links) (PDF)
The amount of data and analysis being published and archived in the biomedical research community is more than can feasibly be sifted through manually, which limits the information an individual or small group can synthesize and integrate into their own research. This presents an opportunity for using automated methods, including Natural Language Processing (NLP), to extract important information from text on various topics. Named Entity Recognition (NER) is one way to automate knowledge extraction from raw text. NER is defined as the task of identifying named entities in text using labels such as people, dates, locations, diseases, and proteins. Several NLP tools are designed for entity recognition, but they rely on large established corpora for training data. Biomedical research has the potential to guide diagnostic and therapeutic decisions, yet the overwhelming density of publications acts as a barrier to getting these results into a clinical setting. An exceptional example of this is the field of breast cancer biology, where over 2 million people are diagnosed worldwide every year and billions of dollars are spent on research. Breast cancer biology literature and research rely on a highly specific domain with unique language and vocabulary, and therefore require specialized NLP tools that can generate biologically meaningful results. This thesis presents a novel annotation tool that is optimized for quickly creating training data for spaCy pipelines, and it explores the viability of that data for analyzing papers with automated processing. Custom pipelines trained on these annotations are shown to recognize custom entities at levels comparable to large-corpus-based recognition.
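As a rough sketch of the kind of pipeline this thesis trains, the snippet below teaches a blank spaCy model a custom entity label from a handful of hand-annotated examples; the label, sentences, and character offsets are invented for illustration and are not drawn from the thesis's annotation tool.

```python
import spacy
from spacy.training import Example

# Hypothetical annotations: (text, {"entities": [(start, end, label)]})
TRAIN_DATA = [
    ("HER2 overexpression marks an aggressive subtype.",
     {"entities": [(0, 4, "BIOMARKER")]}),
    ("Triple-negative tumors lack ER, PR, and HER2.",
     {"entities": [(28, 30, "BIOMARKER"), (32, 34, "BIOMARKER"),
                   (40, 44, "BIOMARKER")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for _, _, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for _ in range(20):  # a few epochs over the toy corpus
    losses = {}
    for text, ann in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer, losses=losses)

# With real annotation volumes, the pipeline generalizes to unseen text
doc = nlp("ER-positive disease responds to endocrine therapy.")
print([(ent.text, ent.label_) for ent in doc.ents])
```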
62

A Design of a Digital Lockout Tagout System with Machine Learning

Chen, Brandon H 01 December 2022 (has links) (PDF)
Lockout Tagout (LOTO) is a safety procedure instituted by the Occupational Safety and Health Administration (OSHA) for maintenance on dangerous machinery and hazardous power sources. In this procedure, authorized workers shut off the machinery and use physical locks and tags to prevent operation during maintenance. LOTO has been the industry standard for the 32 years since it was introduced, and it is used in many different industries such as industrial work, mining, and agriculture. However, LOTO is not without its issues: the procedure requires employees to be trained and is prone to human error, and there is a clash between the technological advancement of machinery and the physical locks and tags that LOTO requires. In this thesis, we propose a digital LOTO system to streamline the LOTO procedure and increase worker safety with machine learning. We first discuss what LOTO is, along with its current requirements, limitations, and issues. We then look at current IoT locks and digital LOTO solutions and compare them against the requirements of traditional LOTO. We then present our proposed digital LOTO system, which uses a rule-based component to enforce and streamline the LOTO procedure and machine learning to detect potential violations of LOTO standards. We also validate that our system fulfills the requirements of LOTO and that the combination of machine learning and rule-based systems ensures worker safety by detecting violations with high accuracy. Finally, we discuss potential future work and improvements on this system, as this thesis is part of a larger collaboration with Chevron, which plans to implement a digital LOTO system in its oil fields.
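A minimal sketch of the rule-based half of such a system might look like the following; the state fields and the two rules are hypothetical simplifications, not the thesis's actual rule set.

```python
from dataclasses import dataclass

@dataclass
class MachineState:
    machine_id: str
    energized: bool           # power source is live
    locks_applied: set        # worker IDs holding locks on this machine
    maintenance_active: bool  # someone is currently servicing it

def loto_violations(state: MachineState) -> list:
    """Check one machine against simplified, illustrative LOTO rules."""
    violations = []
    if state.maintenance_active and state.energized:
        violations.append("machine energized during maintenance")
    if state.maintenance_active and not state.locks_applied:
        violations.append("no lock applied during maintenance")
    return violations

# Both rules fire for this hypothetical state
print(loto_violations(MachineState("pump-7", energized=True,
                                   locks_applied=set(),
                                   maintenance_active=True)))
```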
63

Analysis of System Reliability as a Capital Investment

Williams, Albert J. 01 January 1978 (has links) (PDF)
This report, "Analysis of System Reliability as a Capital Investment", is an analysis of radar system reliability of two similar tracking radar systems as a capital investment. It describes the two tracking radar systems and calculates the mission failures rates based upon field failure data. Additionally, an analysis of a simulation program written in FORTRAN is performed which treats system reliability as a capital investment based on 335 electronic systems that were fabricated with a reliability program versus 564 electronic systems fabricated without a reliability program. The data from the two tracking radar systems, one with reliability program and the other without, is incorporated in the computer program to verify the conclusions of the author of the computer simulation program.
64

CUDA Web API: Remote Execution of CUDA Kernels Using Web Services

Becker, Massimo J 01 June 2012 (has links) (PDF)
Massively parallel programming is a rapidly growing field, spurred by the recent introduction of general-purpose GPU computing. Modern graphics processors from NVIDIA and AMD have massively parallel architectures that can be used for such applications as 3D rendering, financial analysis, physics simulations, and biomedical analysis. These massively parallel systems are exposed to programmers through interfaces such as NVIDIA's CUDA, OpenCL, and Microsoft's C++ AMP, which expose functionality primarily through C or C++. To use these massively parallel frameworks, programs must run on machines equipped with massively parallel hardware, a requirement that limits the flexibility of new massively parallel systems. This paper explores the possibility that massively parallel systems can be exposed through web services in order to facilitate their use from remote systems written in other languages. To explore this possibility, an architecture is put forth with requirements and a high-level design for building a web service that can overcome the limitations of existing tools and frameworks. The CUDA Web API is built using Python, PyCUDA, NumPy, JSON, and Django to meet the requirements set forth. Additionally, a client application, CUDA Cloud, is built and serves as an example web service client. The CUDA Web API's performance and functionality are validated using a common matrix multiplication algorithm implemented in different languages and tools. Performance tests show runtime improvements for larger datasets when using the CUDA Web API for remote CUDA kernel execution over serial implementations. This paper concludes that existing limitations associated with GPGPU usage can be overcome with the specified architecture.
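As a rough sketch of the remote-execution idea, a client could serialize a CUDA kernel and its arguments as JSON and POST them to the service; the endpoint URL and request schema below are hypothetical stand-ins, not the thesis's actual API.

```python
import json
import numpy as np
import requests  # assumed HTTP client library

# A plain CUDA C kernel, shipped to the service as text
KERNEL = r"""
__global__ void matmul(float *a, float *b, float *c, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += a[row * n + k] * b[k * n + col];
        c[row * n + col] = acc;
    }
}
"""

n = 64
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

payload = {
    "kernel": KERNEL,
    "entry": "matmul",
    "grid": [n // 16, n // 16, 1],
    "block": [16, 16, 1],
    "args": [a.tolist(), b.tolist(), None, n],  # None marks the output slot
}

# Hypothetical endpoint; a real deployment would define its own schema
resp = requests.post("http://localhost:8000/cuda/execute",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
c = np.array(resp.json()["result"], dtype=np.float32)
print(c.shape)
```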
65

Rethinking the I/O Stack for Persistent Memory

Chowdhury, Mohammad Ataur Rahman 28 March 2018 (has links)
Modern operating systems have been designed around the hypotheses that (a) memory is both byte-addressable and volatile and (b) storage is block-addressable and persistent. The arrival of new Persistent Memory (PM) technologies has made these assumptions obsolete. Despite much recent work in this space, the need for consistently sharing PM data across multiple applications remains an urgent, unsolved problem, and simple yet powerful operating system support remains elusive. In this dissertation, we propose and build the Region System, a high-performance operating system stack for PM that implements usable consistency and persistence for application data. The region system provides support for consistently mapping and sharing data resident in PM across user application address spaces. It introduces a novel IPI-based PMSYNC operation, which ensures atomic persistence of mapped pages across multiple address spaces. This allows applications to consume PM using the well-understood and much-desired memory-like model with an easy-to-use interface. Next, we propose a metadata structure without any redundant metadata to reduce CPU cache flushes. The high-performance design minimizes the expensive PM ordering and durability operations by embracing a minimalistic approach to metadata construction and management. To strengthen the case for the region system, we analyze different types of applications to identify their dependence on memory-mapped data usage, and we propose the user-level libraries LIBPM-R and LIBPMEMOBJ-R to support shared persistent containers. Together with the region system, these libraries demonstrate a comprehensive end-to-end software stack for consuming PM devices.
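The memory-like model described here is similar in spirit to mapping a persistent-memory-backed file and flushing stores; the sketch below uses ordinary mmap/msync as a stand-in (the region system's own interface is not reproduced), and the file path is hypothetical.

```python
import mmap
import os
import struct

PATH = "/mnt/pmem/shared.dat"  # hypothetical DAX-mounted PM file

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, 4096)
buf = mmap.mmap(fd, 4096)  # MAP_SHARED by default on POSIX

# Byte-addressable, in-place update: no read-modify-write of a whole block
struct.pack_into("<Q", buf, 0, 42)

# Make the store durable; PMSYNC additionally makes the update atomic across
# every address space that maps the same region, which plain msync does not
buf.flush()

buf.close()
os.close(fd)
```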
66

Sustainable Resource Management for Cloud Data Centers

Mahmud, A. S. M. Hasan 15 June 2016 (has links)
In recent years, the demand for data center computing has increased significantly due to the growing popularity of cloud applications and Internet-based services. Today's large data centers host hundreds of thousands of servers, and the peak power rating of a single data center may exceed 100MW. The combined electricity consumption of global data centers accounts for about 3% of worldwide production, raising serious concerns about their carbon footprint. Utility providers and governments are consistently pressuring data center operators to reduce their carbon footprint and energy consumption. While operators such as Apple, Facebook, and Google have taken steps to reduce their carbon footprints (e.g., by installing on-site/off-site renewable energy facilities), they are aggressively looking for new approaches that do not require expensive hardware installation or modification. This dissertation focuses on developing algorithms and systems to improve sustainability in data centers without incurring significant additional operational or setup costs. In the first part, we propose a provably-efficient resource management solution for a self-managed data center that caps and reduces carbon emission while maintaining satisfactory service performance; our solution reduces the carbon emission of a self-managed data center to the net-zero level and achieves carbon neutrality. In the second part, we consider minimizing carbon emission in a hybrid data center infrastructure that includes geographically distributed self-managed and colocation data centers. This part identifies and addresses the challenges of resource management in a hybrid data center infrastructure and proposes an efficient distributed solution to jointly optimize workload and resource allocation in both self-managed and colocation data centers. In the final part, we explore sustainable resource management from the cloud service user's point of view. A cloud service user purchases computing resources (e.g., virtual machines) from the service provider and does not have direct control over the carbon emission of the provider's data center. Our proposed solution encourages a user to take part in sustainable (both economical and environmental) computing by limiting its spending on cloud resource purchases while satisfying its application performance requirements.
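As a toy illustration of carbon-aware allocation (not the dissertation's provably-efficient algorithm), a greedy scheme might fill the lowest-carbon sites first:

```python
def allocate_workload(demand, sites):
    """Greedy, illustrative allocation: fill the cleanest sites first.

    sites: list of (name, capacity_units, kg_co2_per_unit)
    Returns {name: units_assigned}; raises if demand exceeds total capacity.
    """
    plan = {}
    remaining = demand
    for name, capacity, _carbon in sorted(sites, key=lambda s: s[2]):
        if remaining <= 0:
            break
        share = min(capacity, remaining)
        plan[name] = share
        remaining -= share
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

# Hypothetical sites: (name, capacity, kgCO2 per unit of work)
sites = [("dc-solar", 40, 0.1), ("dc-grid", 100, 0.5), ("colo-east", 60, 0.3)]
print(allocate_workload(120, sites))  # {'dc-solar': 40, 'colo-east': 60, 'dc-grid': 20}
```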
67

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014 (has links)
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or sheer scale of a computational study can sometimes be encapsulated in the form of a workflow made up of numerous dependent components. Due to its decomposable and parallelizable nature, the components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools help manage the various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e., workflow orchestration). Our efforts in this study focus on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each. Our first contribution increases scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains; we devise and implement a generic decentralization framework for orchestrating workflows under such conditions. Our second contribution addresses the issues that arise from the dynamic nature of such environments; we provide generic adaptation mechanisms that are highly transparent and substantially less intrusive with respect to the rest of the executing workflow. Our third contribution improves the efficiency of orchestrating large-scale parameter-sweep workflows; by exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows. We also discuss the implementation issues and details that arise as we provide these contributions.
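For a sense of what run-time coordination of dependent components involves, the sketch below dispatches workflow tasks in dependency order using Kahn's algorithm; it is a toy centralized baseline, not the decentralized framework contributed in this dissertation.

```python
from collections import deque

def orchestrate(tasks, deps):
    """Run workflow components in dependency order (Kahn's algorithm).

    tasks: {name: callable}; deps: {name: set of prerequisite names}
    """
    indegree = {t: len(deps.get(t, set())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    while ready:
        t = ready.popleft()
        tasks[t]()  # a distributed orchestrator would submit this to a remote site
        for child in dependents[t]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)

orchestrate(
    {"stage": lambda: print("stage data"),
     "analyze": lambda: print("analyze"),
     "report": lambda: print("report")},
    {"analyze": {"stage"}, "report": {"analyze"}},
)
```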
68

Real-Time Scheduling of Embedded Applications on Multi-Core Platforms

Fan, Ming 21 March 2014 (has links)
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature, and reliability are imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, developing both partitioned and semi-partitioned scheduling algorithms for fixed-priority, periodic, hard real-time tasks. We then extended our research to temperature constraints: we developed a closed-form solution that captures temperature dynamics for a given periodic voltage schedule on multi-core platforms, along with three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness: we investigated the energy estimation problem on multi-core platforms and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
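As a point of reference for partitioned scheduling, the classic first-fit heuristic under the Liu and Layland rate-monotonic utilization bound looks like the sketch below; this is the textbook baseline, not the dissertation's specific algorithms.

```python
def rm_bound(n: int) -> float:
    """Liu & Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit_partition(tasks, num_cores):
    """Assign (wcet, period) tasks to cores, first-fit by decreasing utilization.

    Returns a list of per-core task lists, or None if the heuristic fails.
    """
    cores = [[] for _ in range(num_cores)]
    for task in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        for core in cores:
            utilization = sum(c / p for c, p in core) + task[0] / task[1]
            if utilization <= rm_bound(len(core) + 1):  # sufficient, not necessary
                core.append(task)
                break
        else:
            return None  # no core can accept the task under the bound
    return cores

# Hypothetical task set: (worst-case execution time, period)
tasks = [(1, 4), (2, 5), (1, 10), (3, 12), (2, 8)]
print(first_fit_partition(tasks, num_cores=2))
```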
69

New Method for Robotic Systems Architecture Analysis, Modeling, and Design

Li, Lu 28 August 2019 (has links)
No description available.
70

Extending Service Oriented Architecture Using Generic Service Representatives

Najafi, Mehran 04 1900 (has links)
Service-Oriented Architecture (SOA) focuses on dividing the enterprise application layer of an enterprise system into components (services) that have direct relationships with the business functionality of the enterprise. Web services, which are based on message exchanges, are the most widely adopted SOA technology; they provide web-accessible programs and devices that have been widely promoted for cloud computing environments. However, different types of web services are required to model actual services in the business domain. In particular, enterprises (business providers such as banks, health care providers, and insurance companies) usually send their agents or other personnel (e.g., representatives, installers, maintainers, and trainers) to client sites to perform required services. An enterprise agent can be modeled as a software agent: a computer program that cannot be transmitted efficiently by communication messages. The lack of an efficient way to model the transmission of enterprise agents in traditional message-based technologies restricts the application and usage of service-oriented architectures. The central problem addressed in this thesis is the need for an efficient SOA model for enterprise agents that enables service providers to process client data locally at the client side.

To address this problem, the thesis proposes to model enterprise agents in SOA with a generic software agent called the Service Representative. This generic software agent stays at the client side and can be customized by different service providers to process client data locally. Moreover, to employ a service representative, the thesis proposes a new type of web service called the Task Service. While a traditional web service, called a Data Service, processes client data completely at the server side, a task service is a web service that can process client data and resources partially or completely at the client side, using a service representative. Each task service assigns a task with three components to the generic service representative: task model, task knowledge, and task data. The task components map to business components such as business process models, business rules and actions, and business data, where they can be efficiently transmitted by service messages.

The combination of a service representative and task services provides an executable platform for service providers at the client side. Moreover, the client does not need to reveal its data, so privacy and security are maintained, and large volumes of client data are processed locally, causing less network traffic. Finally, real-time and event-triggered web services can be developed based on the proposed approach.

The main contributions and novelty of this research are: i) a domain-independent computational model of enterprise agents in SOA to support a wide variety of client-processing tasks; ii) client-side web services that are compatible with typical server-side web services and comparable to other client-side processing technologies; iii) extensions of the SOA architecture with novel generic components, including the service representative, the competition desk, and the service composition certifier; iv) a formal model of client-side and server-side web services based on their construction from business components; v) empirical evaluations of the web service model in a number of different applications, using a prototype system; and vi) the application of the developed model to a number of target domains, including health care. Furthermore, because client-side and server-side web services are complementary, a decision support model is provided to assist service developers in deciding upon the best service type for a web service. / Doctor of Science (PhD)
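A minimal sketch of a task message carrying the three components might look like the following; the field contents are invented for illustration and do not reflect the thesis's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Task:
    # The three task components named in the thesis; the concrete
    # representations below are illustrative placeholders.
    task_model: list      # e.g., ordered process steps
    task_knowledge: dict  # e.g., business rules keyed by step
    task_data: dict       # values or references to client-side data

# A hypothetical task a provider might ship to the client-side representative
task = Task(
    task_model=["validate_input", "score_risk", "report"],
    task_knowledge={"score_risk": "if income < threshold then flag"},
    task_data={"threshold": 30000},
)

# Unlike a mobile code agent, this serializes compactly into a service message
message = json.dumps(asdict(task))
print(message)
```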
