231

Application-Aware Resource Management

Ghadse, Sheetal Prakash 21 March 2011 (has links)
No description available.
232

Exploring shopfloor data collection challenges within ETO and its impact on Production Planning and Control : Master thesis

Kivuto, Florian Alain January 2022 (has links)
The increasing trend towards customization, which has also been emphasized as a competitive advantage, has given engineer-to-order (ETO) companies a leading role in many industries. In parallel, digital information and technological advancement are rising, and companies aim to improve their processes to achieve greater success in their operations. Recent research has discussed Industry 4.0 and improved production planning methods in the manufacturing industry. Manufacturing companies are thus striving to increase their efficiency, and readily available data has been one of the most important common denominators of this transformation; indeed, it is considered a necessity for survival in the current highly competitive market. Despite this, data collection, a crucial part of this transformation, remains underexplored by academia, especially in ETO, as do production planning methodologies and tools, given that ETO companies have complex production processes and rely heavily on manual labor from skilled operators. This thesis therefore sets out to investigate and explore shopfloor data collection and production planning and control (PPC) in the ETO environment. The research approach was a case study at an electrical transformer manufacturing plant located in Sweden, complemented by benchmarking companies. The data collection techniques used were interviews, observation, and an extensive literature review, which guided the realization of the aim and gave a base for the improvements suggested for the problems identified. The findings of this master thesis illustrate that a lack of detailed plans, detailed information, and IT development, together with considerable manual effort, negatively impacts the performance of PPC. Furthermore, these issues correlate strongly with the still-manual shopfloor data collection that many ETO companies rely on.
Principles that can be employed to mitigate these effects are discussed in detail throughout the thesis.
233

A Proposal of a Mobile Health Data Collection and Reporting System for the Developing World

Shao, Deo January 2012 (has links)
Data collection is one of the important components of public health systems. Decision makers, policy makers, and health service providers need accurate and timely data in order to improve the quality of their services. The rapidly growing use of mobile technologies has increased demand for mobile-based data collection solutions to bridge the information gaps in the health sector of the developing world. This study reviews existing health data collection systems and the available open source tools that can be used to improve them. We further propose a prototype built on open source data collection frameworks to test their feasibility for improving health data collection in the developing world. We focused on statistical health data, which are reported to secondary health facilities from primary health facilities. The proposed prototype offers ways of collecting health data through mobile phones and visualizes the collected data in a web application. Finally, we conducted a qualitative study to assess challenges in remote health data collection and to evaluate the usability and functionality of the proposed prototype. The evaluation of the prototype suggests that mobile technologies, particularly open source technologies, are feasible for improving health data collection and reporting systems in the developing world.
234

An On-Road Investigation of Commercial Motor Vehicle Operators and Self-Rating of Alertness and Temporal Separation as Indicators of Driver Fatigue

Belz, Steven M. 29 November 2000 (has links)
This on-road field investigation employed, for the first time, a completely automated, trigger-based data collection system capable of evaluating driver performance in an extended-duration, real-world commercial motor vehicle environment. The complexities associated with the development of the system, both technological and logistical, and the necessary modifications to the plan of research are presented herein. This study, performed in conjunction with an ongoing three-year contract with the Federal Highway Administration, examined the use of self-rating of alertness and of temporal separation (minimum time-to-collision, minimum headway, and mean headway) as indicators of driver fatigue. Without exception, the regression analyses for both self-rating of alertness and temporal separation yielded models low in predictive ability; neither metric was found to be a valid indicator of driver fatigue. Various reasons for the failure of self-rating of fatigue as a valid measure are discussed. Dispersion in the data, likely due to extraneous (non-fatigue-related) factors (e.g., other drivers), is credited with reducing the sensitivity of the temporal separation indicators. Overall fatigue levels for temporal separation incidents (those with a time-to-collision of four seconds or less) were found to be significantly higher than for randomly triggered incidents. On this basis, it is surmised that temporal separation may be a sensitive indicator for time-to-collision values greater than the 4-second criterion employed in this study. Two unexpected relationships in the data are also discussed. A "wall" effect was found to exist for minimum time-to-collision values at 1.9 seconds; that is, none of the participants in this research effort exhibited following behaviors with less than a 1.9-second time-to-collision.
In addition, based upon the data collected for this research, anecdotal evidence suggests that commercial motor vehicle operators do not appear to follow the standard progression of events associated with the onset of fatigue. / Ph. D.
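The temporal-separation measures named in this abstract (minimum time-to-collision, minimum and mean headway) can be made concrete with a short sketch. This is an illustrative snippet under assumed inputs (gap in meters, speeds in m/s), not code or data from the dissertation; all names are invented:

```python
def temporal_separation(samples):
    """Compute temporal-separation metrics from following-vehicle samples.

    Each sample is (gap_m, follower_speed_mps, lead_speed_mps).
    Time headway = gap / follower speed; time-to-collision = gap / closing
    speed, defined only while the follower is closing on the lead vehicle.
    """
    headways, ttcs = [], []
    for gap, v_follow, v_lead in samples:
        if v_follow > 0:
            headways.append(gap / v_follow)      # time headway (s)
        closing = v_follow - v_lead
        if closing > 0:
            ttcs.append(gap / closing)           # time-to-collision (s)
    return {
        "min_ttc": min(ttcs) if ttcs else float("inf"),
        "min_headway": min(headways),
        "mean_headway": sum(headways) / len(headways),
    }

# Two invented samples: 30 m gap while closing at 5 m/s, then 20 m at 1 m/s.
print(temporal_separation([(30.0, 25.0, 20.0), (20.0, 25.0, 24.0)]))
```

Under this definition, the study's trigger criterion corresponds to a sample whose computed time-to-collision falls at or below four seconds.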
235

Optimizing Information Freshness in Wireless Networks

Li, Chengzhang 18 January 2023 (has links)
Age of Information (AoI) is a performance metric that can be used to measure the freshness of information. Since its inception, it has captured the attention of the research community and is now an area of active research. By its definition, AoI measures the elapsed time period between the present time and the generation time of the information. AoI is fundamentally different from traditional metrics such as delay or latency, as the latter only consider the transit time for a packet to traverse the network. Among the state-of-the-art in the literature, we identify two limitations that deserve further investigation. First, many existing efforts on AoI have been limited to information-theoretic exploration by considering extremely simple models and unrealistic assumptions, which are far from real-world communication systems. Second, among most existing work on scheduling algorithms to optimize AoI, there is a lack of research on guaranteeing AoI deadlines. The goal of this dissertation is to address these two limitations in the state-of-the-art. First, we design schedulers to minimize AoI under more practical settings, including varying sampling periods, varying sample sizes, cellular transmission models, dynamic channel conditions, etc. Second, we design schedulers to guarantee hard or soft AoI deadlines for each information source. More importantly, inspired by our results from guaranteeing AoI deadlines, we develop a general design framework that can be applied to construct high-performance schedulers for AoI-related problems. This dissertation is organized into three parts. In the first part, we study two problems on AoI minimization under general settings. (i) We consider general and heterogeneous sampling behaviors among source nodes, varying sample size, and a cellular-based transmission model. We develop a near-optimal low-complexity scheduler---code-named Juventas---to minimize AoI.
(ii) We study the AoI minimization problem under a 5G network with dynamic channels. To meet the stringent real-time requirement for 5G, we develop a GPU-based near-optimal algorithm---code-named Kronos---and implement it on commercial off-the-shelf (COTS) GPUs. In the second part, we investigate three problems on guaranteeing AoI deadlines. (i) We study the problem of guaranteeing a hard AoI deadline for information from each source. We present a novel low-complexity procedure, called Fictitious Polynomial Mapping (FPM), and prove that FPM can find a feasible scheduler for any hard deadline vector when the system load is under ln 2. (ii) For soft AoI deadlines, i.e., where occasional violations can be tolerated, we present a novel procedure called Unstable Tolerant Scheduler (UTS). UTS hinges upon the notions of Almost Uniform Schedulers (AUSs) and step-down rate vectors. We show that UTS has strong performance guarantees under different settings. (iii) We investigate a 5G scheduling problem to minimize the proportion of time during which the AoI exceeds a soft deadline. We derive a property called uniform fairness and use it as a guideline to develop a 5G scheduler---Aequitas. To meet the real-time requirement in 5G, we implement Aequitas on a COTS GPU. In the third part, we present Eywa---a general design framework that can be applied to construct high-performance schedulers for AoI-related optimization and decision problems. The design of Eywa is inspired by the notions of AUS schedulers and step-down rate vectors from our development of UTS in the second part. To validate the efficacy of the proposed Eywa framework, we apply it to solve a number of problems, such as minimizing the sum of AoIs, minimizing the bandwidth requirement under AoI constraints, and determining the existence of feasible schedulers to satisfy AoI constraints.
We find that for each problem, Eywa can either offer a stronger performance guarantee than the state-of-the-art algorithms, or provide new/general results that are not available in the literature. / Doctor of Philosophy / Age of Information (AoI) is a performance metric that can be used to measure the freshness of information. It measures the elapsed time period between the present time and the generation time of the information. Through a literature review, we have identified two limitations: (i) many existing efforts on AoI have employed extremely simple models and unrealistic assumptions, and (ii) most existing work focuses on optimizing AoI while overlooking the AoI deadline requirements of some applications. The goal of this dissertation is to address these two limitations. For the first limitation, we study the problem of minimizing the average AoI in general and practical settings, such as dynamic channels and 5G NR networks. For the second limitation, we design schedulers to guarantee hard or soft AoI deadlines for information from each source. Finally, we develop a general design framework that can be applied to construct high-performance schedulers for AoI-related problems.
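The AoI definition given in this abstract (elapsed time between the present and the generation time of the freshest received update) can be sketched in a few lines. This is an illustrative snippet, not code from the dissertation, and all names and sample values are invented:

```python
def aoi(t, generation_times):
    """Age of Information at time t: time elapsed since the generation
    of the newest update received at or before t."""
    received = [g for g in generation_times if g <= t]
    if not received:
        raise ValueError("no update received by time t")
    return t - max(received)

# Updates generated at times 0, 4, and 9; at t = 10 the freshest is 9,
# so the AoI is 10 - 9 = 1.
print(aoi(10, [0, 4, 9]))  # 1
```

This also illustrates how AoI differs from per-packet delay: AoI keeps growing between updates, so a scheduler must deliver fresh samples regularly, not merely deliver each packet quickly.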
236

The Importance of Data in RF Machine Learning

Clark IV, William Henry 17 November 2022 (has links)
While the toolset known as Machine Learning (ML) is not new, several of its tools have seen revitalization with improved hardware and have been applied across several domains in the last two decades. Deep Neural Network (DNN) applications have contributed to significant research on Radio Frequency (RF) problems over the last decade, spurred by results in image and audio processing. ML, and Deep Learning (DL) specifically, is driven by access to relevant data during the training phase, because the learned feature sets are derived from vast amounts of similar data. Despite this critical reliance on data, the literature provides insufficient answers on how to quantify the training data an application needs in order to achieve a desired performance. This dissertation first aims to create a practical definition that bounds the problem space of Radio Frequency Machine Learning (RFML), which we take to mean the application of ML as close to the sampled baseband signal directly after digitization as possible, while allowing for preprocessing when reasonably defined and justified. After constraining the problem to the RFML domain, the kinds of ML that have been applied, as well as the techniques that have shown benefits, are reviewed from the literature. With the problem space defined and the trends in the literature examined, the next goal is a better understanding of the concept of data quality through quantification. This quantification helps explain how the quality of data affects ML systems with regard to final performance, how it drives the required quantity of data observations, and how its impacts can be generalized and contrasted.
With an understanding of how data quality and quantity affect the performance of a system in the RFML space, data generation techniques and realizations from conceptual through real-time hardware implementations are discussed. Consequently, the results of this dissertation provide a foundation for estimating the investment required to realize a performance goal within a DL framework, as well as a rough order of magnitude for common goals within the RFML problem space. / Doctor of Philosophy / Machine Learning (ML) is a powerful toolset capable of solving difficult problems across many domains. A fundamental part of this toolset is the representative data used to train a system. Unlike the domains of image or audio processing, for which datasets are constantly being developed thanks to usage agreements with entities such as Facebook, Google, and Amazon, the field of ML within the Radio Frequency (RF) domain, or Radio Frequency Machine Learning (RFML), does not have access to such crowdsourced means of creating labeled datasets. Therefore data within the RFML problem space must be intentionally cultivated to address the target problem. This dissertation explains the RFML problem space and then quantifies the effect of quality on the data used to train RFML systems. Taking this one step further, the work provides a means of estimating the data quantity needed to achieve high levels of performance with the current Deep Learning (DL) approach, which in turn can guide refinement of the approach when real-world data quantity requirements exceed practical acquisition levels.
Finally, the problem of data generation is examined, providing context for the difficulties associated with procuring high-quality data for problems in the RFML space.
237

Design of a data acquisition system to control and monitor a velocity probe in a fluid flow field

Herwig, Nancy Lou January 1982 (has links)
A data acquisition system to control the position of a velocity probe and to acquire digital voltages as an indication of fluid velocity is presented. This system replaces a similar manually operated traverse system; it relieves the operator of control and acquisition tasks while providing a more consistent and systematic approach to the acquisition process. The design includes the TRS-80 microcomputer, with external interfacing accomplished using the STD-based bus design. / Master of Science
238

Real time data acquisition for load management

Ghosh, Sushmita 15 November 2013 (has links)
Demand for data transfer between computers has increased ever since the introduction of the Personal Computer (PC). Data communication on the PC is much more productive because the PC is an intelligent terminal that can connect to various hosts on the same I/O hardware circuit as well as execute processes on its own as an isolated system. Yet the PC on its own is useless for data communication: it requires a hardware interface circuit and software for controlling the handshaking signals and setting up communication parameters. Often the data is distorted by noise in the line; such transmission errors are embedded in the data and require careful filtering. This thesis deals with the development of a data acquisition system that collects real-time load and weather data and stores it as a historical database for use in a load forecast algorithm in a load management system. A filtering technique has been developed here that checks for transmission errors in the raw data. The microcomputers used in this development are the IBM PC/XT and the AT&T 3B2 supermicro computer. / Master of Science
239

Hidden labour: The skilful work of clinical audit data collection and its implications for secondary use of data via integrated health IT

McVey, Lynn, Alvarado, Natasha, Greenhalgh, J., Elshehaly, Mai, Gale, C.P., Lake, J., Ruddle, R.A., Dowding, D., Mamas, M., Feltbower, R., Randell, Rebecca 26 July 2021 (has links)
Secondary use of data via integrated health information technology is fundamental to many healthcare policies and processes worldwide. However, repurposing data can be problematic, and little research has been undertaken into the everyday practicalities of inter-system data sharing that helps explain why this is so, especially within (as opposed to between) organisations. In response, this article reports one of the most detailed empirical examinations undertaken to date of the work involved in repurposing healthcare data for National Clinical Audits. Methods: Fifty-four semi-structured, qualitative interviews were carried out with staff in five English National Health Service hospitals about their audit work, including 20 staff involved substantively with audit data collection. In addition, ethnographic observations took place on wards, in ‘back offices’ and meetings (102 hours). Findings were analysed thematically and synthesised in narratives. Results: Although data were available within hospital applications for secondary use in some audit fields, which could, in theory, have been auto-populated, in practice staff regularly negotiated multiple, unintegrated systems to generate audit records. This work was complex and skilful, and involved cross-checking and double data entry, often using paper forms, to assure data quality and inform quality improvements. Conclusions: If technology is to facilitate the secondary use of healthcare data, the skilled but largely hidden labour of those who collect and recontextualise those data must be recognised. Their detailed understandings of what it takes to produce high quality data in specific contexts should inform the further development of integrated systems within organisations.
240

Asset Management Data Collection for Supporting Decision Processes

Pantelias, Aristeidis 23 August 2005 (has links)
Transportation agencies engage in extensive data collection activities in order to support their decision processes at various levels. However, not all the data collected supply transportation officials with useful information for efficient and effective decision-making. This thesis presents research aimed at formally identifying links between data collection and the decision processes it supports. The research identifies existing relationships between Asset Management data collection and the decision processes to be supported, particularly at the project selection level. It also proposes a framework for effective and efficient data collection. The motivation of the project was to help transportation agencies optimize their data collection processes and cut data collection and management costs. The methodology entailed two parts: a comprehensive literature review that collected information from various academic and industrial sources around the world (mostly from Europe, Australia, and Canada), and a web survey e-mailed to expert individuals within the 50 U.S. Departments of Transportation (DOTs) and Puerto Rico. The electronic questionnaire was designed to capture state officials' experience and practice on: asset management endorsement and implementation; data collection, management, and integration; decision-making levels and decision processes; and identified relations between decision processes and data collection. The survey responses were analyzed statistically and combined with the additional resources to develop the proposed framework and recommendations. The results of this research are expected to help transportation agencies and organizations not only reduce data collection costs but also make more effective project selection decisions. / Master of Science
