About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Instrumentation for the control of the formation of industrial particulate mixtures, and their real-time monitoring by ultrasound

Marshall, Thomas January 2000 (has links)
No description available.
2

Process monitoring and fault diagnosis methods using constraint suspension

Leary, Jerome J. January 1988 (has links)
No description available.
3

Multivariate Image Analysis for Real-Time Process Monitoring

Bharati, Manish 09 1900 (has links)
In today’s technically advanced society, the collection and study of digital images has become an important aspect of various off-line applications that range from medical diagnosis to exploring the Martian surface for traces of water. Various industries have recently started moving towards vision-based systems to monitor several of their manufacturing processes. Except for some simple on-line applications, these systems are primarily used to analyze digital images off-line. This thesis is concerned with developing a more powerful on-line digital image analysis technique that links traditional digital image processing with a recently devised statistically based image analysis method called multivariate image analysis (MIA).

The first part of the thesis introduces traditional digital image processing through a brief literature review of three of its five main classes (image enhancement, restoration, analysis, compression, and synthesis), which contain most of the commonly used operations in this area. This introduction is intended as a starting point for readers who have little background in the field, and provides sufficient detail on these techniques so that they can be used in conjunction with other advanced MIA on-line monitoring operations.

MIA of multispectral digital images using latent variable statistical methods (multi-way PCA/PLS) is the main topic of the second part of this thesis. After reviewing the basic theory of feature extraction using MIA for off-line analyses, a new technique is introduced that extends these ideas to on-line image analysis. Instead of directly using the updated images themselves to monitor a time-varying process, this new technique uses the latent variable space of the image to monitor the increase or decline in the number of pixels belonging to various features of interest.
The ability to switch between the images and their latent variable space then allows the user to determine the exact spatial locations of any features of interest. This new method is shown to be ideal for monitoring interesting features from time-varying processes equipped with multispectral sensors. It forms a basis for future on-line industrial process monitoring schemes in those industries that are moving towards automatic vision systems using multispectral digital imagery. / Thesis / Master of Engineering (ME)
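The latent-variable monitoring idea described in this abstract can be illustrated with a minimal sketch (not the thesis's actual implementation; the two-component model and the rectangular feature region in the score plot are assumptions for illustration):

```python
import numpy as np

def train_pixel_pca(images, n_comp=2):
    """Fit a PCA model on pixel spectra pooled from training images.
    Each image is (height, width, bands); rows of X are pixels."""
    X = np.vstack([im.reshape(-1, im.shape[-1]) for im in images])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_comp].T                # loadings: (bands, n_comp)

def pixel_scores(image, mean, loadings):
    """Project every pixel into the latent (score) space."""
    X = image.reshape(-1, image.shape[-1]) - mean
    return X @ loadings                       # (n_pixels, n_comp)

def count_feature_pixels(scores, box):
    """Count pixels whose (t1, t2) scores fall inside a rectangular
    region of the score plot associated with a feature of interest."""
    (t1lo, t1hi), (t2lo, t2hi) = box
    mask = ((scores[:, 0] >= t1lo) & (scores[:, 0] <= t1hi) &
            (scores[:, 1] >= t2lo) & (scores[:, 1] <= t2hi))
    return int(mask.sum()), mask
```

Tracking the pixel count over successive images gives a time trend for each feature, while the mask, reshaped back to the image dimensions, recovers the spatial locations of the flagged pixels, mirroring the switch between image space and latent variable space.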
4

Micro Molding Process Monitoring and Control

Whiteside, Benjamin R., Babenko, Maksims, Brown, Elaine C. 03 May 2019 (has links)
No description available.
5

Multidimensional Visualization of Process Monitoring and Quality Assurance Data in High-Volume Discrete Manufacturing

Teets, Jay Marshall 12 March 2007 (has links)
Advances in microcomputing hardware and software over the last several years have resulted in personal computers with exceptional computational power and speed. As the costs associated with microcomputer hardware and software continue to decline, manufacturers have begun to implement numerous information technology components on the shop floor. Components such as microcomputer file servers and client workstations are replacing traditional (manual) methods of data collection and analysis since they can be used as a tool for real-time decision-making. Server-based and web-based shop floor data collection and monitoring software applications are able to collect vast amounts of data in a relatively short period of time. In addition, advances in telecommunications and computer interconnectivity allow for the remote access and sharing of this data for additional analysis. Rarely, however, does the method by which a manager reviews production and quality data keep pace with the large amount of data being collected and thus available for analysis. Visualization techniques that allow the decision maker to react quickly, such as the ability to view and manipulate vast amounts of data in real-time, may provide an alternative for operations managers and decision-makers. These techniques can be used to improve the communication between the manager using a microcomputer and the microcomputer itself through the use of computer-generated, domain-specific visualizations. This study explores the use of visualization tools and techniques applied to manufacturing systems as an aid in managerial decision-making. Numerous visual representations that support process and quality monitoring have been developed and presented for evaluation of process and product quality characteristics. 
These visual representations are based on quality assurance and process monitoring data from a high-volume, discrete product manufacturer with considerable investment in automated and intelligent processes and information technology components. A computer-based application was developed to display the visual representations, which were then presented to a sample group of evaluators, who assessed how well the representations supported accurate and timely decisions about the processes being monitored. The study concludes with a summary of the results and directions for future research. / Ph. D.
6

Exploiting process topology for optimal process monitoring

Lindner, Brian Siegfried 12 1900 (has links)
Thesis (MEng) -- Stellenbosch University, 2014. / ENGLISH ABSTRACT: Modern mineral processing plants are characterised by a large number of measured variables, interacting through numerous processing units, control loops and often recycle streams. Consequently, faults in these plants propagate throughout the system, causing significant degradation in performance. Fault diagnosis therefore forms an essential part of performance monitoring in such processes. Feature extraction methods have been shown in the literature to be useful for fault diagnosis in chemical and minerals processes. However, the ability of these methods to identify the causes of faults is limited to identifying variables that display symptoms of the fault. Since faults propagate throughout the system, these results can be misleading, and further fault identification has to be applied. Faults propagate through the system along material, energy or information flow paths, so process topology information can be used to aid fault identification: topology information can be used to separate the process into multiple blocks to be analysed separately for fault diagnosis; the change in topology caused by fault conditions can be exploited to identify symptom variables; and a topology map of the process can be used to trace faults back from their symptoms to possible root causes. The aim of this project, therefore, was to develop a process monitoring strategy that exploits process topology for fault detection and identification. Three methods for extracting topology from historical process data were compared: linear cross-correlation (LC), partial cross-correlation (PC) and transfer entropy (TE). The connectivity graphs obtained from these methods were used to divide the process into multiple blocks. Two feature extraction methods were then applied for fault detection: principal component analysis (PCA), a linear method, was compared with kernel PCA (KPCA), a nonlinear method.
In addition, three types of monitoring charts were compared: Shewhart charts, exponentially weighted moving average (EWMA) charts, and cumulative sum (CUSUM) charts. Two methods for identifying symptom variables for fault identification were then compared: using the contributions of individual variables to the PCA SPE, and considering the change in connectivity. The topology graphs were then used to trace faults to their root causes. It was found that topology information was useful for fault identification in most of the fault scenarios considered. However, performance was inconsistent, being dependent on the accuracy of the topology extraction. It was also concluded that blocking using topology information substantially improved fault detection and fault identification performance. A recommended fault diagnosis strategy is presented based on the results obtained from all the fault diagnosis methods considered. / AFRIKAANSE OPSOMMING (translated): Modern mineral processing plants are characterised by a large number of measured variables that interact through various process units, control loops and recycle streams. As a result, faults can propagate through the entire system, degrading process performance. Fault diagnosis therefore forms an essential part of performance monitoring. According to the literature, feature extraction methods are useful for fault diagnosis in chemical and mineral processing plants. However, the ability of these methods to identify the fault is limited to identifying variables that display symptoms of the fault. Since faults propagate through the system, results can be misleading, and further fault identification methods must be applied. Faults propagate through the process along material, energy or information flow paths, so process topology information can be used to support fault identification. Topology information can be used to separate the process into multiple blocks that are analysed individually. The change in topology caused by fault conditions can then be analysed to identify symptom variables, and a topology map of the process can be examined to trace possible root causes of faults. The aim of this project was therefore to develop a process monitoring strategy that exploits process topology for fault detection and fault identification. Three methods for topology extraction from historical process data were compared: linear cross-correlation, partial cross-correlation and transfer entropy. The connectivity graphs obtained from these extraction methods were used to separate the process into multiple blocks. Two feature extraction methods were then applied for fault detection: principal component analysis (PCA), a linear method, and kernel principal component analysis (KPCA), a nonlinear method. In addition, three types of monitoring charts were compared: Shewhart charts, exponentially weighted moving average charts and cumulative sum charts. Two methods for identifying symptom variables for fault identification were then compared: using individual variables, and considering the change in connectivity. The connectivity graphs were then used to trace root causes of faults. It was found that topology information was useful for fault identification for most of the fault conditions investigated. Nevertheless, performance was inconsistent, since it depends on the accuracy with which topology extraction was performed. It was also concluded that the use of topology blocks substantially improved fault detection and fault identification performance. A recommended fault diagnosis strategy is proposed.
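The PCA fault detection and SPE contribution analysis described in this abstract can be sketched roughly as follows (a hedged illustration only: the empirical 99% SPE control limit and the autoscaling choices are assumptions, not the thesis's exact procedure):

```python
import numpy as np

def fit_pca_monitor(X_train, n_comp):
    """Fit a PCA monitoring model on normal operating data
    (rows = samples, columns = process variables)."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sd
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    P = vt[:n_comp].T                          # retained loadings
    spe = ((Z - Z @ P @ P.T) ** 2).sum(axis=1)
    limit = np.percentile(spe, 99)             # empirical 99% control limit
    return mu, sd, P, limit

def spe_contributions(x, mu, sd, P):
    """Per-variable contributions to the squared prediction error
    (SPE) of a new sample; their sum is the SPE statistic, and the
    largest contributions point at symptom variables."""
    z = (x - mu) / sd
    resid = z - P @ (P.T @ z)
    return resid ** 2
```

A sample whose summed contributions exceed the limit is flagged as a fault, and the largest contributions nominate symptom variables that can then be traced back through the topology graph.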
7

A Generic BI Application for Real-time Monitoring of Care Processes

Baffoe, Shirley A. 14 June 2013 (has links)
Patient wait times and care service times are key performance measures for care processes in hospitals. Managing the quality of care delivered by these processes in real-time is challenging. A key challenge is to correlate source medical events to infer the care process states that define patient wait times and care service times. Commercially available complex event processing engines do not have built-in support for the concept of care process state. This makes it unnecessarily complex to define and maintain rules for inferring states from source medical events in a care process. Another challenge is how to present the data in a real-time BI dashboard, and which underlying data model should support that dashboard; the data representation architecture can potentially introduce delays in processing and presenting the data. In this research, we have investigated the problem of real-time monitoring of care processes, performed a gap analysis of current information system support for it, researched and assessed available technologies, and shown how to most effectively leverage event-driven and BI architectures when building information support for real-time monitoring of care processes. We introduce a state monitoring engine for inferring and managing states based on an application model for care process monitoring. A BI architecture is also leveraged for the data model to support the real-time data processing and reporting requirements of the application’s portal. The research is validated with a case study to create a real-time care process monitoring application for an Acute Coronary Syndrome (ACS) clinical pathway in collaboration with IBM and Osler hospital. The research methodology is based on design-oriented research.
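The idea of a state monitoring engine that infers care-process states from source medical events can be sketched as a small rule-driven state machine (the states, event types, and transition rules below are hypothetical and far simpler than the ACS pathway application described):

```python
from dataclasses import dataclass, field

# Hypothetical transition rules: (current_state, event_type) -> new_state.
RULES = {
    ("registered", "triage_start"): "in_triage",
    ("in_triage", "triage_end"): "waiting_for_physician",
    ("waiting_for_physician", "physician_assess"): "in_treatment",
    ("in_treatment", "discharge"): "discharged",
}

@dataclass
class PatientCase:
    """Tracks one patient's inferred state and transition history."""
    state: str = "registered"
    history: list = field(default_factory=list)

    def apply(self, event_type, timestamp):
        """Advance the state if a rule matches the incoming event;
        unmatched events are ignored as irrelevant to this case."""
        new = RULES.get((self.state, event_type))
        if new is not None:
            self.history.append((self.state, new, timestamp))
            self.state = new
        return self.state

def time_in_state(case, state):
    """Wait/service time in a state: timestamp of leaving it minus
    timestamp of entering it (None if either is missing)."""
    entered = left = None
    for prev, new, ts in case.history:
        if new == state and entered is None:
            entered = ts
        if prev == state:
            left = ts
    return None if entered is None or left is None else left - entered
```

Keeping state inference in one rule table, rather than scattered across event-processing queries, is the kind of simplification a dedicated state monitoring engine provides.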
9

Case and Activity Identification for Mining Process Models from Middleware

Bala, Saimir, Mendling, Jan, Schimak, Martin, Queteschiner, Peter 12 October 2018 (has links) (PDF)
Process monitoring aims to provide transparency over operational aspects of a business process. In practice, it is a challenge that traces of business process executions span across a number of diverse systems. It is cumbersome manual engineering work to identify which attributes in unstructured event data can serve as case and activity identifiers for extracting and monitoring the business process. Approaches from the literature assume that these identifiers are known a priori and that data is readily available in formats like eXtensible Event Stream (XES). However, in practice this is hardly the case, specifically when event data from different sources are pooled together in event stores. In this paper, we address this research gap by inferring potential case and activity identifiers in a provenance-agnostic way. More specifically, we propose a semi-automatic technique for discovering event relations that are semantically relevant for business process monitoring. The results are evaluated in an industry case study with an international telecommunication provider.
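As a rough illustration of inferring identifiers from unstructured event data (this is a simple cardinality heuristic for intuition only, not the semi-automatic technique proposed in the paper):

```python
def score_identifier_candidates(events):
    """Score each attribute of raw event records as a potential
    case or activity identifier using cardinality heuristics:
    activity identifiers tend to have few distinct values that
    repeat often, while case identifiers have many distinct
    values, each shared by a handful of events. Keys unique to
    every event (timestamps, event ids) score zero as case ids."""
    keys = set().union(*(e.keys() for e in events))
    scores = {}
    for k in keys:
        values = [e[k] for e in events if k in e]
        ratio = len(set(values)) / len(values)     # distinct / total
        scores[k] = {
            "activity_score": 1.0 - ratio,
            "case_score": ratio if ratio < 1.0 else 0.0,
        }
    return scores
```

In a semi-automatic setting, such scores would only rank candidates; an analyst still decides which attribute is semantically the case and which the activity.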
10

Development of a Safeguards Monitoring System for Special Nuclear Facilities

Henkel, James Joseph 01 August 2011 (has links)
Two important issues in nuclear materials safeguards are the continuous monitoring of nuclear processing facilities to verify that undeclared uranium is not processed or enriched, and the verification that declared uranium is accounted for. The International Atomic Energy Agency (IAEA) is tasked with ensuring that special nuclear facilities operate as declared and that proper material safeguards are followed. Traditional safeguards measures have relied on IAEA personnel inspecting each facility and verifying material with authenticated instrumentation. In newer facilities, most plant instrumentation data are collected electronically and stored in a central computer. Facilities collect this information for a variety of reasons, most notably process optimization and monitoring. The field of process monitoring has grown significantly over the past decades, and techniques have been developed to detect and identify changes and to improve reliability and safety. Several of these techniques can also be applied to international and domestic safeguards. This dissertation introduces a safeguards monitoring system developed for both a simulated uranium blend-down facility and a water-processing facility at the Oak Ridge National Laboratory. For the simulated facility, a safeguards monitoring system is developed using an auto-associative kernel regression model, and the effects of incorporating facility-specific radiation sensors and preprocessing the data are examined. The best safeguards model was able to detect diversions as small as 1.1%. For the ORNL facility, a load cell monitoring system was developed. This monitoring system provides an inspector with an efficient way to identify undeclared activity and atypical facility operation, including diversions as small as 0.1 kg. The system also provides a foundation for an on-line safeguards monitoring approach in which inspectors remotely review facility data to draw safeguards conclusions, possibly reducing the needed frequency and duration of traditional inspections.
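An auto-associative kernel regression model of the kind mentioned above can be sketched as follows (a minimal illustration with an assumed Gaussian kernel and bandwidth, not the dissertation's actual model):

```python
import numpy as np

def aakr_reconstruct(memory, query, h=1.0):
    """Auto-associative kernel regression: reconstruct a query
    observation as a Gaussian-kernel-weighted average of stored
    'normal' observations (the memory matrix, rows = samples)."""
    d2 = ((memory - query) ** 2).sum(axis=1)       # squared distances
    w = np.exp(-d2 / (2.0 * h ** 2))               # Gaussian kernel weights
    w = w / (w.sum() + 1e-12)                      # normalise
    return w @ memory

def aakr_residual(memory, query, h=1.0):
    """Residual between what the sensors report and what 'normal'
    operation predicts; a persistently large residual flags
    off-normal behaviour."""
    return query - aakr_reconstruct(memory, query, h)
```

Because the memory matrix holds only declared, normal operation, a query far from all stored observations cannot be reconstructed well, and the resulting residual is the anomaly signal a safeguards inspector would monitor.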
