  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Novel strategies for design of high temperature titania-based gas sensors for combustion process monitoring

Frank, Marla Lea 06 November 2003 (has links)
No description available.
22

Advanced Data Analytics for Quality Assurance of Smart Additive Manufacturing

Shen, Bo 07 July 2022 (has links)
Additive manufacturing (AM) is a powerful emerging technology for fabricating components with complex geometries from a variety of materials. Despite this promising potential, however, the complexity of the process dynamics makes it challenging to ensure the quality and consistency of AM parts efficiently during the build. This dissertation therefore develops advanced machine learning methods for online process monitoring and quality assurance of smart additive manufacturing. Driven by edge computing, the Industrial Internet of Things (IIoT), sensors, and other smart technologies, data collection, communication, analytics, and control are infiltrating every aspect of manufacturing. These data provide excellent opportunities to improve and revolutionize manufacturing in both quality and productivity; yet, although massive volumes of data are generated in a very short time, approximately 90 percent go wasted or unused. The goal of sensing and data analytics for advanced manufacturing is to capture the full insight that data and analytics can deliver on the most pressing problems. To this end, several data-driven approaches to data preprocessing, feature extraction, and inverse design are developed in this dissertation, together with supporting theory that guarantees their performance; all are validated on sensor data from AM processes. Specifically, four new methodologies are proposed and implemented:
1. To make unqualified thermal data meet the spatial and temporal resolution requirements of microstructure prediction, a super-resolution method for multi-source image-stream data, based on smooth and sparse tensor completion, is proposed and applied to AM data acquisition. From the qualified thermal data, useful quantities such as the boundary velocity and the thermal gradient can then be extracted.
2. To extract features effectively from high-dimensional data with limited samples, a clustered discriminant regression is created for classification problems in healthcare and additive manufacturing. Combined with classic classifiers, the proposed feature extraction method achieves better image-classification performance than a convolutional neural network.
3. To extract melt pool information from processed X-ray video of metal AM processes, a smooth sparse robust tensor decomposition model is devised that separates the data into a static background, a smooth foreground, and noise. The proposed method exhibits superior performance in extracting melt pool information from X-ray data.
4. To learn the material property under different printing settings, a multi-task Gaussian process upper confidence bound method is developed for sequential experiment design, with a no-regret algorithm that aims to learn the optimal material property for each printing setting.
By fully utilizing sensor data with innovative data analytics, these methodologies support interdisciplinary research, promote technical innovation, and achieve balanced theoretical and practical advances. Moreover, because they are integrated into a generic framework, they can easily be extended to other manufacturing processes and systems, and even to other application areas such as healthcare. / Doctor of Philosophy / Additive manufacturing (AM) technology is rapidly changing industry, and data from various sensors and simulation software can further improve AM product quality. The objective of this dissertation is to develop methodologies for process monitoring and quality assurance using advanced data analytics.
In this dissertation, four new methodologies are developed to address the problems of unqualified data, high-dimensional data with limited samples, and inverse design. Related theories are also studied to identify the conditions under which the performance of the developed methodologies can be guaranteed. To validate their effectiveness and efficiency, various data sets from sensors and simulation software are used for testing. The results demonstrate that the proposed methods are promising across different AM applications. The work is not limited to AM, however: the developed methodologies can readily be transferred to other domains such as healthcare and computer vision.
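The fourth methodology above builds on the Gaussian process upper confidence bound (GP-UCB) principle for sequential experiment design. The following is a minimal single-task sketch of that principle only, not the dissertation's multi-task method; the RBF kernel, the beta value, and the synthetic one-dimensional "material property" curve are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(rbf(Xs, Xs)) - np.einsum('ij,ij->j', Ks, sol),
                  1e-12, None)
    return mu, np.sqrt(var)

def gp_ucb_pick(X, y, cand, beta=2.0):
    """Next experiment: maximize the upper confidence bound mu + beta*sd."""
    mu, sd = gp_posterior(X, y, cand)
    return int(np.argmax(mu + beta * sd))

f = lambda x: np.sin(3 * x[:, 0])          # hidden "material property" curve
cand = np.linspace(0.0, 2.0, 50)[:, None]  # candidate printing settings
X, y = cand[[0, -1]], f(cand[[0, -1]])     # two initial experiments
for _ in range(10):                        # sequential design loop
    i = gp_ucb_pick(X, y, cand)
    X = np.vstack([X, cand[i]])
    y = np.append(y, f(cand[i:i + 1]))
best = float(cand[np.argmax(y), 0])        # setting with best observed property
```

Each iteration evaluates the setting whose optimistic estimate (posterior mean plus scaled standard deviation) is highest, trading exploration against exploitation; after a handful of runs the best observed setting should lie near the true optimum of the hidden curve.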
23

An investigation on automatic systems for fault diagnosis in chemical processes

Monroy Chora, Isaac 03 February 2012 (has links)
Plant safety is the foremost concern of the chemical industries. Process faults can cause economic losses as well as human and environmental damage. Most operational faults are normally considered in the process design phase by applying methodologies such as Hazard and Operability Analysis (HAZOP). However, failures must still be expected in an operating plant. For this reason, it is of paramount importance that plant operators can promptly detect and diagnose such faults in order to take the appropriate corrective actions, and preventive maintenance also needs to be considered in order to increase plant safety. Fault diagnosis has been addressed with both analytical and data-based models, using a range of techniques and algorithms. However, there is as yet no general fault diagnosis framework that joins the detection and diagnosis of faults, whether or not previously recorded; still less effort has been devoted to automating the reported approaches and implementing them in real practice. Against this background, this thesis proposes a general framework for data-driven Fault Detection and Diagnosis (FDD), applicable in any industrial scenario and amenable to automation, in order to maintain plant safety. The main requirement for constructing this system is the existence of historical process data. Promising methods imported from the Machine Learning field are introduced as fault diagnosis methods; these learning algorithms have proved capable of diagnosing not only the modeled faults but also novel faults. Furthermore, Risk-Based Maintenance (RBM) techniques, widely used in the petrochemical industry, are proposed as part of preventive maintenance in all industry sectors. The proposed FDD system, together with an appropriate preventive maintenance program, would constitute a plant safety program ready for implementation.
Chapter one presents a general introduction to the thesis topic, together with its motivation and scope. Chapter two reviews the state of the art of the related fields: the fault detection and diagnosis methods found in the literature are surveyed, and a taxonomy is proposed that joins the Artificial Intelligence (AI) and Process Systems Engineering (PSE) classifications. The assessment of fault diagnosis with performance indices is also reviewed, as is the state of the art in Risk Analysis (RA), as a tool for corrective actions against faults, and in Maintenance Management, for preventive actions. Finally, the benchmark case studies against which FDD research is commonly validated are examined. The second part of the thesis, comprising chapters three to six, addresses the methods applied during the research: chapter three deals with data pre-processing, chapter four with the feature processing stage, and chapter five with the diagnosis algorithms, while chapter six introduces the Risk-Based Maintenance techniques for plant preventive maintenance. The third part contains chapter seven, which constitutes the core of the thesis. There the proposed general FDD system is outlined, divided into three steps: diagnosis model construction, model validation, and on-line application. This scheme includes a fault detection module and an Anomaly Detection (AD) methodology for detecting novel faults, and several approaches for continuous and batch processes are derived from it. The fourth part presents the validation of the approaches: chapter eight validates the proposed approaches on continuous processes and chapter nine on batch processes, while chapter ten applies the AD methodology to real-scale batch processes.
First, the methodology is applied to a laboratory heat exchanger and then to a Photo-Fenton pilot plant, corroborating its potential and success in real practice. Finally, the fifth part, consisting of chapter eleven, presents the final conclusions and the main contributions of the thesis, lists the scientific output achieved during the research period, and outlines prospects for further work.
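The core idea of the proposed FDD system, diagnosing faults already present in historical records while flagging novel faults through anomaly detection, can be sketched as follows. This is a deliberately simplified stand-in (a nearest-centroid classifier with a distance threshold), not the machine learning algorithms studied in the thesis; the fault names and synthetic data are hypothetical.

```python
import numpy as np

class SimpleFDD:
    """Toy fault diagnoser: nearest-centroid classification of known fault
    classes, plus a distance threshold that flags novel (unmodelled) faults."""

    def fit(self, X, labels):
        labels = np.array(labels)
        self.classes = sorted(set(labels))
        self.centroids = np.array(
            [X[labels == c].mean(axis=0) for c in self.classes])
        # anomaly threshold: largest training distance to the nearest centroid
        d = np.linalg.norm(X[:, None] - self.centroids[None], axis=2).min(axis=1)
        self.thresh = 1.1 * d.max()
        return self

    def diagnose(self, x):
        d = np.linalg.norm(self.centroids - x, axis=1)
        if d.min() > self.thresh:      # far from every known fault signature
            return "novel fault"
        return self.classes[int(np.argmin(d))]

# synthetic signatures of two known fault classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
               rng.normal([3, 3], 0.1, (20, 2))])
labels = ["valve stiction"] * 20 + ["sensor bias"] * 20
fdd = SimpleFDD().fit(X, labels)
known = fdd.diagnose(np.array([0.05, -0.02]))   # lands near a known class
novel = fdd.diagnose(np.array([10.0, -8.0]))    # matches no known class
```

The diagnosis model is built from historical data, validated against a held-out threshold, and then applied on-line, mirroring the three-step scheme of the thesis; a sample that matches no known fault signature is escalated as a novel fault rather than misclassified.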
24

Monitoring and diagnosis of process systems using kernel-based learning methods

Jemwa, Gorden Takawadiyi 12 1900 (has links)
Thesis (PhD (Process Engineering))--University of Stellenbosch, 2007. / Dissertation presented for the degree of Doctor of Philosophy in Engineering at the University of Stellenbosch. / ENGLISH ABSTRACT: The development of advanced methods of process monitoring, diagnosis, and control has been identified as a major 21st-century challenge in control systems research and application. This is particularly the case for chemical and metallurgical operations, owing to the lack of expressive fundamental models as well as the nonlinear nature of most process systems, which makes established linearization methods unsuitable. As a result, efforts have been directed at the search for alternative approaches that do not require fundamental or analytical models. Data-based methods provide a very promising alternative in this regard, given the huge volumes of data collected in modern process operations and the advances in both theoretical and practical aspects of extracting information from observations. In this thesis, the use of kernel-based learning methods for fault detection and diagnosis of complex processes is considered. Kernel-based machine learning methods are a robust family of algorithms founded on insights from statistical learning theory. Instead of estimating a decision function by minimizing the training error, as other learning algorithms do, kernel methods use a criterion called large margin maximization to estimate a linear learning rule on data embedded in a suitable feature space. The embedding is implicitly defined by the choice of a kernel function and corresponds to inducing a nonlinear learning rule in the original measurement space. Large margin maximization amounts to developing an algorithm with theoretical guarantees on how well it will perform on unseen data. In the first contribution, the characterization of time series data from process plants is investigated.
Whereas complex processes are difficult to model from first principles, they can be identified using historical process time series data and a suitable model structure. However, before fitting such a model, it is important to establish whether the time series data justify the selected model structure. Singular spectrum analysis (SSA) has been used for time series identification, and a nonlinear extension of SSA is proposed here for the classification of time series. Using benchmark systems, the proposed extension is shown to perform better than linear SSA. Moreover, the method is shown to be useful for filtering noise in time series data and therefore has potential applications in other tasks such as data rectification and gross error detection. Multivariate statistical process monitoring methods are well-established techniques for efficient information extraction from multivariate data. Such information is usually compact and amenable to graphical representation in two- or three-dimensional plots. For process monitoring purposes, control limits are also plotted on these charts, usually based on a hypothesized analytical distribution, typically the Gaussian normal distribution. A robust approach is instead proposed for estimating confidence bounds from the reference data, based on one-class classification methods. The usefulness of data-defined confidence bounds in reducing fault detection errors is illustrated using plant data. The use of both linear and nonlinear supervised feature extraction is also investigated, and the advantages of supervised feature extraction using kernel methods are highlighted via illustrative case studies. A general strategy for fault detection and diagnosis is proposed that integrates feature extraction methods, fault identification, and different methods for estimating confidence bounds.
For kernel-based approaches, the general framework allows the results to be interpreted in the input space instead of the feature space. An important step in process monitoring is identifying the variable responsible for a fault. Although not all faults that can occur at a plant can be known beforehand, knowledge of previous faults, or simulations, can be used to anticipate their recurrence. A framework for fault diagnosis using one-class support vector machine (SVM) classification is proposed; compared with other previously studied techniques, the one-class SVM approach is shown to have generally better robustness and performance characteristics. Most methods for process monitoring make little use of data collected under normal operating conditions, whereas most quality issues in process plants are known to occur while the process is in control. In the final contribution, a methodology for continuous optimization of process performance is proposed that combines support vector learning with decision trees. The methodology searches continuously for quality improvements by challenging the normal operating regions established via statistical control. Simulated and plant data are used to illustrate the approach.
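The noise-filtering role of SSA mentioned in the abstract can be illustrated with a minimal linear SSA sketch; the thesis's nonlinear, kernel-based extension is not shown, and the window length, rank, and synthetic sine series are arbitrary illustrative choices.

```python
import numpy as np

def ssa_filter(x, window, rank):
    """Basic linear SSA: embed the series in a trajectory matrix, keep the
    leading `rank` singular components, then average the anti-diagonals
    (Hankelisation) back into a filtered series."""
    n, k = len(x), len(x) - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    rec, cnt = np.zeros(n), np.zeros(n)
    for j in range(k):                       # diagonal averaging
        rec[j:j + window] += approx[:, j]
        cnt[j:j + window] += 1
    return rec / cnt

t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.3, t.size)
filtered = ssa_filter(noisy, window=40, rank=2)
err_noisy = np.abs(noisy - clean).mean()     # error before filtering
err_filt = np.abs(filtered - clean).mean()   # error after filtering
```

A pure sinusoid is captured almost entirely by two singular components of the trajectory matrix, so truncating to rank 2 discards most of the noise while preserving the signal, the same mechanism that makes SSA useful for data rectification and gross error detection.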
25

Process monitoring and fault diagnosis using random forests

Auret, Lidia 12 1900 (has links)
Thesis (PhD (Process Engineering))--University of Stellenbosch, 2010. / Dissertation presented for the Degree of DOCTOR OF PHILOSOPHY (Extractive Metallurgical Engineering) in the Department of Process Engineering at the University of Stellenbosch / ENGLISH ABSTRACT: Fault diagnosis is an important component of process monitoring, relevant in the greater context of developing safer, cleaner and more cost efficient processes. Data-driven unsupervised (or feature extractive) approaches to fault diagnosis exploit the many measurements available on modern plants. Certain current unsupervised approaches are hampered by their linearity assumptions, motivating the investigation of nonlinear methods. The diversity of data structures also motivates the investigation of novel feature extraction methodologies in process monitoring. Random forests are recently proposed statistical inference tools, deriving their predictive accuracy from the nonlinear nature of their constituent decision tree members and the power of ensembles. Random forest committees provide more than just predictions; model information on data proximities can be exploited to provide random forest features. Variable importance measures show which variables are closely associated with a chosen response variable, while partial dependencies indicate the relation of important variables to said response variable. The purpose of this study was therefore to investigate the feasibility of a new unsupervised method based on random forests as a potentially viable contender in the process monitoring statistical tool family. The hypothesis investigated was that unsupervised process monitoring and fault diagnosis can be improved by using features extracted from data with random forests, with further interpretation of fault conditions aided by random forest tools. The experimental results presented in this work support this hypothesis. An initial study was performed to assess the quality of random forest features. 
Random forest features were shown to be generally difficult to interpret in terms of the geometry of the original variable space. Random forest mapping and demapping models were shown to be very accurate on training data, but to extrapolate weakly to unseen data falling outside the regions populated by the training data. Random forest feature extraction was applied to unsupervised fault diagnosis on process data and compared with linear and nonlinear methods. Random forest results were comparable to those of existing techniques, with the majority of random forest detections due to variable reconstruction errors. Further investigation revealed that the residual detection success of random forests originates in the constrained responses and poor generalization artifacts of decision trees. Random forest variable importance measures and partial dependencies were incorporated in a visualization tool to aid the interpretation of fault conditions. A dynamic change-point detection application with random forests proved more successful than an existing principal component analysis-based approach, the success of the random forest method again residing in reconstruction errors. The addition of the random forest fault diagnosis and change-point detection algorithms to a suite of abnormal event detection techniques is recommended. The distance-to-model diagnostic based on random forest mapping and demapping proved successful in this work, and the theoretical understanding gained supports its application to further data sets.
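The proximity and variable-importance machinery described above can be sketched with scikit-learn, assuming its availability; the synthetic dataset is a stand-in for plant measurements, and this is not the unsupervised monitoring scheme developed in the thesis itself. The proximity of two samples is the fraction of trees in which they land in the same leaf.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for plant measurements with a response label
X, y = make_classification(n_samples=60, n_features=5, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# proximity of two samples = fraction of trees in which they share a leaf
leaves = rf.apply(X)                  # (n_samples, n_trees) leaf indices
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

# variable importance: which measurements are associated with the response
importance = rf.feature_importances_
```

The proximity matrix is symmetric with a unit diagonal and can feed a feature-extraction or visualization step, while the importance scores indicate which measured variables to inspect when interpreting a fault condition.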
26

Change-point detection in dynamical systems using auto-associative neural networks

Bulunga, Meshack Linda 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: In this research work, auto-associative neural networks have been used for change-point detection. This is a nonlinear technique that employs artificial neural networks, inspired among others by Frank Rosenblatt's linear perceptron algorithm for classification. An auto-associative neural network was used successfully to detect change-points in various types of time series data. Its performance was compared with that of the singular spectrum analysis method developed by Moskvina and Zhigljavsky. The fraction of explained variance (FEV) was also used to compare the performance of the two methods; FEV indicators are similar to the eigenvalues of the covariance matrix in principal component analysis. Two types of time series data were used for change-point detection: Gaussian data series and nonlinear reaction data series. The Gaussian data comprised four series with different types of change-points: a change in the mean of the time series (T1), a change in the variance (T2), a change in the autocorrelation (T3), and a change in the cross-correlation of two time series (T4). Both the linear and nonlinear methods were able to detect the changes in T1, T2 and T4; neither could detect the change in T3. With the Gaussian data series, linear singular spectrum analysis (LSSA) performed as well as nonlinear singular spectrum analysis (NLSSA) for change-point detection, because the time series were linear and the nonlinearity of NLSSA was therefore not important. LSSA did even better than NLSSA when comparing FEV values, since it is not subject to the suboptimal solutions that can arise when training auto-associative neural networks. The nonlinear data consisted of Belousov-Zhabotinsky (BZ) reaction data, autocatalytic reaction time series data, and data representing a predator-prey system.
With the NLSSA method, change-points could be detected accurately in all three systems, while LSSA only managed to detect the change-points in the BZ reactions and the predator-prey system. The NLSSA method also fared better than the LSSA method when comparing FEV values for the BZ reactions. The LSSA method was able to model the autocatalytic reactions fairly accurately, explaining 99% of the variance in the data with a single component; NLSSA with two nodes in the bottleneck attained an FEV of 87%. The performance of NLSSA and LSSA was comparable for the predator-prey system, where both could attain FEV values of 92% with a single component. An auto-associative neural network is a good technique for change-point detection in nonlinear time series data; however, it offers no advantage over linear techniques when the time series data are linear.
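For a sense of how the linear SSA baseline compared above operates, here is a minimal sketch (synthetic signal and invented parameters, not the thesis's implementation): lagged vectors from a base window define a subspace, and a change is flagged when a test window's lagged vectors are poorly explained by that subspace:

```python
import numpy as np

def hankel(x, L):
    """Trajectory matrix: columns are length-L lagged vectors of x."""
    return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])

# Synthetic signal with a frequency change at t = 200.
t = np.arange(400)
x = np.sin(0.2 * t)
x[200:] = np.sin(0.7 * t[200:])

# Fit a low-dimensional subspace to a base (reference) window.
L = 20
U, _, _ = np.linalg.svd(hankel(x[:150], L), full_matrices=False)
Ur = U[:, :2]  # a pure sine's lagged vectors span two dimensions

def residual(segment):
    """Mean squared distance of lagged vectors from the base subspace."""
    Xs = hankel(segment, L)
    return np.mean((Xs - Ur @ (Ur.T @ Xs)) ** 2)

d_before = residual(x[100:180])   # same regime: residual near zero
d_after = residual(x[220:300])    # after the change: large residual
```

The jump in the residual statistic marks the change-point; replacing the SVD projection with an auto-associative network's encode-decode mapping gives the nonlinear (NLSSA-style) variant the thesis studies.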
27

Study of deformation mechanisms and metallurgical transformations on and below the machined surface during turning of Ti-6Al-4V with an uncoated WC-Co cemented carbide tool. Correlation of surface integrity with process monitoring signals and an understanding of tool damage mechanisms

Rancic, Mickael 21 December 2012 (has links)
This thesis was carried out within the framework of the European project ACCENT, the continuation of the European project ManHIRP (2001-2005). Its main objective is to develop an experimental methodology for establishing a window of cutting conditions that guarantees acceptable surface integrity of the machined Ti-6Al-4V part, based on the signals recorded by process monitoring equipment during machining. Particular attention was paid to identifying and classifying the geometrical anomalies and those produced on the machined surface as a function of cutting speed and feed rate. In parallel with these investigations, a study of the damage mode of the uncoated WC-Co cemented carbide tool and of the evolution of the monitoring signals led to a much better understanding of the phenomena associated with cutting. The anomalies generated in the sub-surface of the machined part, such as layers of deformed grains and "white layers", were studied by fine metallurgical analyses, including Castaing electron microprobe analysis and observations with transmission electron microscopy (TEM). Micro-hardness and residual stress measurements complemented these chemical and microstructural analyses. Knowledge of the metallurgical and mechanical state of these anomalies made it possible to deduce their genesis and the deformation mechanisms and metallurgical transformations (phases and grain size) that operated in the Ti-6Al-4V sub-surface.
Tracking the microstructural parameters of the globular alpha grains by image analysis led to a better understanding of material flow along the cutting and feed directions, as well as of the dissolution mechanisms of the globular alpha phases when thermal effects dominate mechanical effects during machining. Following these metallurgical characterizations, correlations were established between the anomalies generated and the process monitoring signals. These rely mainly on the radial force, whose singular evolution over time indicates the appearance of defects; the amplitude of the radial force correlates with the depth of the thermo-mechanically affected layer of Ti-6Al-4V.
28

Design considerations in high-throughput automation for biotechnology protocols

Unknown Date (has links)
In this dissertation, a computer-aided automation design methodology for biotechnology applications is proposed that leads to several design guidelines. Because of the biological nature of the samples that propagate through the automation line, a very specific set of environmental conditions and maximum allowed shelf times must be observed to obtain good yield. In addition, all biotechnology protocols require a precise sequence of steps; the samples are scarce and the reagents are costly, so no waste can be afforded. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
29

Surface integrity control by process monitoring during the machining of critical superalloy parts for aeronautical turbine engines

Dutilh, Vincent 24 May 2011 (has links)
This thesis was carried out within the framework of the European project ACCENT on surface integrity control by process monitoring during the machining of critical aeronautical parts. Previous studies have shown that anomalies generated during machining can reduce the fatigue life of parts. In this context, the objective of this work is to model the relationship between surface integrity and the signals recorded during the drilling of Udimet® 720. Using an experimental approach based on the Tool-Material Couple (Couple-Outil-Matière) methodology and design of experiments, tool wear, surface integrity and the recorded signals were characterized as a function of the cutting conditions and the operating context (lubricant, material hardness). This characterization established correlations between the characteristics of surface integrity and those of the signals. These correlations were modeled by direct and statistical methods in order to demonstrate the relevance of power, force and acceleration sensors to machining anomalies. The sensors thus allow prediction of the layer of material affected by the operation in most machining contexts.
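The "direct and statistical" modelling step — relating features of the recorded signals to the depth of the affected layer — can be illustrated with an ordinary least-squares fit on made-up data (the feature names, coefficients and units below are purely hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical drilling trials: features extracted from power, force and
# acceleration signals; response is the affected-layer depth (micrometres).
n = 40
power_rms = rng.uniform(1.0, 2.0, n)
thrust_peak = rng.uniform(200, 400, n)
accel_rms = rng.uniform(0.1, 0.5, n)
depth = 10 + 25 * power_rms + 0.05 * thrust_peak + rng.normal(0, 1.0, n)

# Ordinary least squares: depth ~ intercept + signal features.
A = np.column_stack([np.ones(n), power_rms, thrust_peak, accel_rms])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
```

A high coefficient of determination on such a fit is what justifies using the sensors to predict the affected layer without destructive inspection.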
30

Development of a Prognostic Method for the Production of Undeclared Enriched Uranium

Hooper, David Alan 01 August 2011 (has links)
As global demand for nuclear energy and threats to nuclear security increase, the need for verification of the peaceful application of nuclear materials and technology also rises. In accordance with the Nuclear Nonproliferation Treaty, the International Atomic Energy Agency is tasked with verifying the declared enrichment activities of member states. Because of the increased cost of inspecting and verifying a globally growing nuclear energy industry, remote process monitoring has been proposed as part of a next-generation, information-driven safeguards program. To further enhance this safeguards approach, it is proposed that process monitoring data be used not only to verify the past but also to anticipate the future via prognostic analysis. While prognostic methods exist for health monitoring of physical processes, the literature lacks methods for predicting the outcome of decision-based events, such as the production of undeclared enriched uranium. This dissertation introduces a method to predict the time at which a significant quantity of unaccounted material is expected to be diverted during an enrichment process. The method uses a particle filter to model the data and provide a Type III (degradation-based) prognostic estimate of the time to diversion of a significant quantity. Measurement noise for the particle filter is estimated from historical data and may be updated with Bayesian estimates from the analyzed data; dynamic noise estimates are updated based on observed changes in the process data. The reliability of the prognostic model for a given range of data is validated via information complexity scores and goodness-of-fit statistics. The developed method is tested using data from the Oak Ridge Mock Feed and Withdrawal Facility, a 1:100-scale test platform for developing gas centrifuge remote monitoring techniques. Four case studies are considered: no diversion, slow diversion, fast diversion, and intermittent diversion.
All intervals of diversion and non-diversion were correctly identified, and the time to diversion of a significant quantity was accurately estimated. For example, a diversion of 0.8 kg over 85 minutes was detected after 10 minutes; after 46 minutes and 40 seconds, the diversion time was predicted to be 84 minutes and 10 seconds, with an uncertainty of 2 minutes and 52 seconds.
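A toy version of the Type III prognostic idea — a particle filter tracking an unaccounted-material trend and extrapolating the time at which it crosses a significant-quantity threshold — might look like this (all quantities, noise levels and the threshold are invented, not the dissertation's values or model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated material-balance data: unaccounted mass grows linearly.
T, dt = 200, 1.0
true_rate = 0.05                              # kg per step (invented)
obs = true_rate * np.arange(T) * dt + rng.normal(0, 0.02, T)

# Particle filter over (level, rate), with a random walk on the rate.
N = 2000
level = rng.normal(0.0, 0.1, N)
rate = rng.normal(0.0, 0.1, N)
sigma_obs, sigma_rate = 0.02, 0.002

for y in obs:
    rate += rng.normal(0, sigma_rate, N)      # process noise
    level += rate * dt                        # state propagation
    w = np.exp(-0.5 * ((y - level) / sigma_obs) ** 2) + 1e-300
    w /= w.sum()                              # normalized likelihood weights
    idx = rng.choice(N, size=N, p=w)          # multinomial resampling
    level, rate = level[idx], rate[idx]

# Prognosis: extrapolate the filtered trend to a threshold crossing.
SQ = 20.0                                     # hypothetical "significant quantity"
t_cross = T + (SQ - level.mean()) / rate.mean()
```

With the filtered level and rate in hand, the predicted crossing time lands near the true value (SQ / true_rate = 400 steps here); spreading the particle cloud forward instead of using the means would give the uncertainty band the dissertation reports.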
