31

Optical Sensor for Measurement of Clad Height during Laser Cladding Process

Asselin, Matthew January 2006 (has links)
The process of laser cladding consists of depositing successive layers of molten metallic powder to create a near-net shape. A high-power laser is used to melt incoming metallic powder, which forms a melt pool on the surface. As the substrate moves beneath the laser, this newly created melt pool solidifies. By properly controlling the trajectory of deposition tracks, one can create a diverse range of shapes of varying complexity. However, the process is very sensitive to its parameters, requiring constant attention from technicians. This lends itself well to the addition of automatic controllers, whereby supervision is minimal.

In this thesis, an optical sensor is developed to monitor the process zone. The sensor outputs a measurement of the height of the solidified clad, which in turn can be used by a controller to adjust this geometrical feature. The thesis is divided into three main parts, each contributing to the final algorithm.

First, in Chapter 3 an analysis is performed on the light radiating from the interaction zone (or melt pool). The dominant source of light is blackbody radiation from the molten metal. This is confirmed by analyzing a series of images captured with a digital camera, where various narrow bandpass filters were used to selectively view a portion of the CCD sensor's spectrum. This investigation also leads to the selection of a bandpass filter such that a stable, relatively intense melt pool image is captured by the camera's CCD sensor.

Second, in Chapter 4 the captured images are passed through a pair of image-processing techniques, outputting a series of coordinates representing the melt pool's boundary. The image is first analyzed to calculate an optimal threshold level based on the minimization of fuzzy entropy. With this threshold selected, the grayscale image is converted to black-and-white, where the white pixels represent the melt pool. The melt pool's boundary is then extracted with an 8-connectivity border-tracing algorithm, which outputs a series of pixel coordinates as though one were traveling along the melt pool boundary in a clockwise direction.

Last, Chapter 5 analyzes these pixel coordinates to extract the melt pool's height. The coordinates are first transformed into real-world coordinates by means of a perspective transformation. This transformation essentially yields the melt pool's shadow, as created by a light source coincident with the camera. The melt pool's height is then estimated from a right-angle triangle, where the camera's angle is known and the projected coordinates give the shadow length (the triangle's base).

The result of applying this series of steps to the estimation of clad heights is found at the end of Chapter 5. Results varied dramatically, from 4% error to 393%. Although the errors are large at times, they are mainly caused by a bias in the estimate: the dynamics of the true clad formation are predicted well by the algorithm, but shifted by a certain amount. This amount varies with both substrate velocity and the clad's direction of travel relative to the camera. A partial explanation is that the clad's height is offset from the laser center-point, and this offset is a function of both parameters; the specific relationship requires further experimentation.
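A minimal sketch of the Chapter 5 geometry, under the assumption of a pinhole camera at a known inclination above the substrate: once the perspective transformation has flattened the boundary into the substrate plane, the clad height follows from the right-triangle relation height = shadow length × tan(camera angle). The function name and the example numbers below are illustrative, not taken from the thesis.

```python
import numpy as np

def clad_height_from_shadow(boundary_xy, camera_angle_deg):
    """Estimate clad height from projected melt-pool boundary points.

    boundary_xy      : (N, 2) array of boundary points already mapped into
                       real-world substrate coordinates (mm), with the x-axis
                       aligned to the camera's viewing direction.
    camera_angle_deg : camera inclination above the substrate plane (degrees).
    """
    x = boundary_xy[:, 0]
    # The projected "shadow" length is the extent of the boundary along the
    # viewing direction (the triangle's base).
    shadow_length = x.max() - x.min()
    # Right-triangle relation: height = base * tan(camera angle).
    return shadow_length * np.tan(np.radians(camera_angle_deg))

# Illustrative use: a 1.2 mm shadow seen by a camera 40 degrees above the substrate.
pts = np.array([[0.0, 0.0], [0.4, 0.3], [1.2, 0.1], [0.8, -0.2]])
print(f"estimated clad height: {clad_height_from_shadow(pts, 40.0):.2f} mm")
```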
32

Flexible Mixed-Effect Modeling of Functional Data, with Applications to Process Monitoring

Mosesova, Sofia 29 May 2007 (has links)
High levels of automation in manufacturing industries are leading to data sets of increasing size and dimension. The challenge facing statisticians and field professionals is to develop methodology to help meet this demand. Functional data are one example of high-dimensional data, characterized by observations recorded as a function of some continuous measure, such as time. An application considered in this thesis comes from the automotive industry. It involves a production process in which valve seats are force-fitted by a ram into the cylinder heads of automobile engines. For each insertion, the force exerted by the ram is automatically recorded every fraction of a second for about two and a half seconds, generating a force profile. We can think of these profiles as individual functions of time summarized into collections of curves. The focus of this thesis is the analysis of functional process data such as the valve seat insertion example. A number of techniques are set forth. In the first part, two ways to model a single curve are considered: a B-spline fit via linear regression, and a nonlinear model based on differential equations. Each of these approaches is incorporated into a mixed-effects model for multiple curves, and multivariate process monitoring techniques are applied to the predicted random effects in order to identify anomalous curves. In the second part, a Bayesian hierarchical model is used to cluster low-dimensional summaries of the curves into meaningful groups. The belief is that the clusters correspond to distinct types of processes (e.g., various types of “good” or “faulty” assembly). New observations can be assigned to one of these clusters by calculating the probabilities of belonging to each cluster. Mahalanobis distances are used to identify new observations not belonging to any of the existing clusters. Synthetic and real data are used to validate the results.
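As a rough illustration of the first modelling step, a single force profile can be fit with a B-spline basis via ordinary linear regression; the fitted coefficient vector is then the low-dimensional summary that downstream monitoring operates on. The basis size, knot placement, and synthetic profile below are arbitrary choices for demonstration, not those used in the thesis.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design_matrix(t_obs, knots, degree=3):
    """Evaluate each B-spline basis function at the observation times."""
    n_basis = len(knots) - degree - 1
    B = np.empty((len(t_obs), n_basis))
    for j in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[j] = 1.0
        B[:, j] = BSpline(knots, coeffs, degree)(t_obs)
    return B

rng = np.random.default_rng(0)

# Illustrative force profile: about 2.5 s of insertion sampled every 0.01 s.
t = np.linspace(0.0, 2.5, 251)
force = 100 * np.sin(1.2 * t) + 20 * t + rng.normal(0, 2, t.size)

# Clamped cubic B-spline basis with a handful of interior knots (arbitrary choice).
interior = np.linspace(0.0, 2.5, 8)
knots = np.concatenate(([0.0] * 3, interior, [2.5] * 3))
B = bspline_design_matrix(t, knots, degree=3)

# Ordinary least squares: the coefficient vector is the curve's low-dimensional summary.
coef, *_ = np.linalg.lstsq(B, force, rcond=None)
print("B-spline coefficient summary:", np.round(coef, 1))
```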
35

Development of a Real-Time Detection Strategy for Material Accountancy and Process Monitoring During Nuclear Fuel Reprocessing Using the UREX+3a Method

Goddard, Braden December 2009 (has links)
Reprocessing nuclear fuel is becoming more viable in the United States due to the anticipated increase in construction of nuclear power plants, the growing stockpile of existing used nuclear fuel, and a public desire to reduce the amount of this fuel. However, a new reprocessing facility in non-weapon states must be safeguarded, and new reprocessing facilities in weapon states will likely have safeguards for political and material accountancy reasons. These facilities will have state-of-the-art controls and monitoring methods to safeguard special nuclear materials, as well as to provide real-time monitoring. The focus of this project is to enable the development of a safeguards strategy that uses well-established photon measurement methods to characterize samples from the UREX+3a reprocessing method using a variety of detector types and measurement times. It was determined that the errors from quantitative measurements were too large for traditional safeguards methods; however, a safeguards strategy based on qualitative gamma-ray and neutron measurements is proposed. The gamma-ray detection equipment used in the safeguards strategy could also be used to improve real-time process monitoring in a yet-to-be-built facility. A facility that had real-time gamma detection equipment could improve product quality control and provide additional benefits, such as waste volume reduction. In addition to the spectral analyses, it was determined by Monte Carlo N-Particle (MCNP) simulations that there is no noticeable self-shielding for internal pipe diameters less than 2 inches, indicating that no self-shielding correction factors are needed. Further, it was determined that HPGe N-type detectors would be suitable for a neutron radiation environment. Finally, the gamma-ray spectra for the measured samples were simulated using MCNP, and the model was then extended to predict the responses from an actual reprocessing scenario with UREX+3a applied to fuel that had a decay time of three years. The 3-year decayed fuel was more representative of commercially reprocessed fuel than the acquired UREX+3a samples. This research found that the safeguards approach proposed in this work would be best suited as an addition to existing safeguards strategies. Real-time gamma-ray detection for process monitoring would be beneficial to a reprocessing facility and could be done with commercially available detectors.
36

Data driven process monitoring based on neural networks and classification trees

Zhou, Yifeng 01 November 2005 (has links)
Process monitoring in the chemical and other process industries has been of great practical importance. Early detection of faults is critical in avoiding product quality deterioration, equipment damage, and personal injury. The goal of this dissertation is to develop process monitoring schemes that can be applied to complex process systems. Neural networks have been a popular tool for modeling and pattern classification in the monitoring of process systems. However, due to the prohibitive computational cost caused by high dimensionality and frequently changing operating conditions in batch processes, their application has been difficult. The first part of this work tackles this problem by employing a polynomial-based data preprocessing step that greatly reduces the dimensionality of the neural network process model. The process measurements and manipulated variables go through a polynomial regression step, and the polynomial coefficients, which are usually of far lower dimensionality than the original data, are used to build a neural network model to produce residuals for fault classification. Case studies show a significant reduction in neural model construction time and sometimes better classification results as well. The second part of this research investigates classification trees as a promising approach to fault detection and classification. It is found that the underlying principles of classification trees often result in complicated trees even for rather simple problems, and construction time can be excessive for high-dimensional problems. Fisher Discriminant Analysis (FDA), which features an optimal linear discrimination between different faults and projects the original data onto perpendicular scores, is used as a dimensionality reduction tool. Classification trees use the scores to separate observations into different fault classes. A procedure identifies the order of FDA scores that results in a minimum tree cost as the optimal order. Comparisons to other popular methods based on multivariate statistical analysis indicate that the new scheme exhibits better performance on a benchmarking problem.
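A hedged sketch of the second scheme's flow (not the dissertation's actual code or data): project the process measurements onto Fisher discriminant scores, then let a small classification tree separate fault classes in the reduced space. Here scikit-learn's LDA stands in for FDA, and the synthetic data, class shifts, and tree depth are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional process measurements with three fault classes.
n_per_class, n_vars = 200, 50
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_class, n_vars))
               for shift in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LDA compresses the 50 variables into at most (classes - 1) = 2 discriminant scores;
# a shallow classification tree then separates the fault classes in score space.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                      DecisionTreeClassifier(max_depth=3, random_state=0))
model.fit(X_train, y_train)
print("hold-out accuracy:", round(model.score(X_test, y_test), 3))
```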
37

Investigation and development of a system for real-time monitoring and control of the pulsed laser beam drilling process

Στουρνάρας, Αριστείδης 07 September 2009 (has links)
The main objective of the present study is the investigation and development of a real-time monitoring method for the percussion laser drilling process. Percussion laser drilling uses a focused laser beam of high power density to melt or vaporize the material. The present work focuses on the development of a process monitoring system capable of predicting the geometrical characteristics of the drilled hole in real time by exploiting optical and acoustic emissions originating from the processing zone. The signals, acquired by means of photodiodes and a microphone, are correlated with the evolution of the process and can consequently be used to evaluate the process output. Theoretical analysis of the sensors' response in relation to the geometry of the hole led to the development of sensor models able to predict the hole's geometrical characteristics, such as the depth and entrance diameter, from the signals acquired during the process. Additionally, the development of empirical models correlating the hole's geometrical characteristics with the process parameters, to be used for off-line process planning as well as for control of the process, was a further research topic of this dissertation. The main conclusion of the present work is that both the optical and the acoustic signals originating from the processing zone during laser drilling can be used for real-time process monitoring, and consequently for process control, ensuring that the geometrical characteristics of the hole meet the defined requirements.
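As a small illustration of the empirical-model idea only (the thesis's actual sensor models are physics-based), simple per-pulse features extracted from a photodiode trace can be regressed against measured hole depths. The feature choices and all numbers below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pulse_features(signal, dt):
    """Per-pulse features from a photodiode trace: peak value and rectangle-rule time integral."""
    return np.array([signal.max(), signal.sum() * dt])

# Illustrative training set: photodiode traces for 20 pulses and the hole depth (mm)
# measured after each pulse.
dt = 1e-6
features, depths = [], []
for i in range(20):
    t = np.arange(0, 500e-6, dt)
    trace = (1.0 + 0.05 * i) * np.exp(-t / 100e-6) + rng.normal(0, 0.01, t.size)
    features.append(pulse_features(trace, dt))
    depths.append(0.02 * i + rng.normal(0, 0.01))

# Linear empirical model: depth ~ intercept + peak + integral, fit by least squares.
X = np.column_stack([np.ones(len(features)), np.array(features)])
coef, *_ = np.linalg.lstsq(X, np.array(depths), rcond=None)
print("empirical depth-model coefficients:", np.round(coef, 4))
```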
38

Monitoring of an Antigen Manufacturing Process Using Fluorescence

Zavatti, Vanessa 12 June 2015 (has links)
Bordetella pertussis is one of two Gram-negative bacteria responsible for causing whooping cough in humans, a highly contagious disease that infects the human upper respiratory tract. Whole-cell and acellular vaccines have been developed, but due to side effects associated with whole-cell vaccines, acellular vaccines are currently preferred to prevent this disease. A second bacterium known to cause whooping cough is Bordetella parapertussis, but since it causes less aggressive symptoms, only B. pertussis is utilized in the manufacture of the vaccine. One acellular vaccine is based on four virulence factors: pertussis toxin (PT), filamentous hemagglutinin (FHA), pertactin (PRN), and fimbriae (FIM). The focus of this thesis was to explore the use of spectrofluorometry for monitoring and forecasting the performance of the upstream and downstream operations in the PRN purification process at Sanofi Pasteur. The upstream fermentation process includes a series of reactors of increasing volume where the microorganism is grown under controlled conditions. The PRN purification process involves a series of sequential steps for separating this protein from other proteins for later use in the vaccine. The PRN is precipitated in three steps with ammonium sulphate at three different concentrations. The pellet is collected by centrifugation and dissolved in a buffer solution, followed by chromatographic separation. The run-through is then ultrafiltered and diafiltered in two separate steps. The resulting concentrate is dissolved in water and subjected to another chromatographic step and diafiltration. The final filtration of PRN involves a pre-filtration and sterile filtration. Finally, the samples are collected for quality control. The objective of this work was to monitor the process at different steps of the upstream and downstream purification process by multi-wavelength fluorescence spectroscopy in combination with multivariate statistical methods. From the spectra, it was possible to identify fluorescent compounds, such as amino acids and enzyme cofactors, without performing an additional pre-treatment or purification step. Also, the identification of conformational changes in proteins and the formation of complexes, such as the NAD(P)-enzyme complex, was possible based on the shift in the emission peaks of the compounds identified. These results demonstrated the feasibility of using this tool for qualitative evaluation of the process. Multivariate methods, such as PCA and PLS, were used to extract relevant information and compress the fluorescence data acquired. PCA was effective for correlating variability in the yield of pertactin to a particular fluorescence fingerprint. As a result of the study, it was concluded that a possible source of the observed variability in productivity might be a metabolic shift during the fermentation steps that leads to the accumulation of NAD(P)H (or NAD(P)H-enzyme complex), probably due to oxygen transfer limitations. This conclusion was reached after investigating changes in the profiles of dissolved oxygen, aeration, agitation, supplementation time, and key metabolites (lactate, glucose, glutamine). The correlation of these parameters with low productivity was not straightforward; however, some consistencies were observed, for example, high levels of glutamine in batches with low productivity.
This fact might be related to the start of the supplementation time, which may in turn be related to the dissolved oxygen, since the addition of the supplement is done manually when an increase in the dissolved oxygen is detected. It is believed that this factor is related to the low production of protein products such as pertactin. By means of PLS, it was possible to build regression models that allow for predicting the final concentration of pertactin from the fluorescence measurements. The models were built using the new variables obtained from data compression performed with PCA, and the final pertactin concentration measured by a Kjeldahl test. With this method, two regressions were constructed: (i) between the NAD(P)H-enzyme complex spectra from the fermenters and pertactin concentration, and (ii) between the pertactin fluorescence spectra from the last step of purification and pertactin concentration. A third model was built using the protein content, the NAD(P)H-enzyme complex content in the fermenters, and pertactin concentration. Attempts were made to identify the possible enzyme that may bind to NAD(P)H, assumed to be a dehydrogenase. Substrates for different enzymes were employed with the objective of measuring changes in the fluorescence of the characteristic peak for this binding (Ex/Em = 280/460 nm). Major changes were detected after addition of the substrates oxaloacetate, ubiquinone and succinate dehydrogenase. Since changes were detected with more than one substrate, it was not possible to unequivocally identify the enzyme; however, the results provide some insight into what may be happening at the metabolic level. The work carried out in this thesis involved both analysis of samples provided or collected by the industrial sponsor and analysis of samples prepared at the University of Waterloo for measurement, interpretation, and calibration. The proposed fluorescence-based method was found suitable for assessing protein quantity as well as for providing an indication of possible protein aggregation and conformational changes. Future work will be required to identify the exact source of variability in the production of pertactin, by means of monitoring the evolution of fermentation, NAD(P)H and ATP measurements, and oxidation-reduction (redox) potential assays.
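A minimal sketch of the chemometric step, assuming the fluorescence excitation-emission landscapes have been unfolded into row vectors: compress the spectra with PCA, then regress pertactin concentration on the scores with PLS. The array shapes, component counts, and synthetic data are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Illustrative stand-in data: 30 batches, each an unfolded excitation-emission
# spectrum of 2000 intensities, plus a measured pertactin concentration.
X = rng.normal(size=(30, 2000))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=30)  # synthetic target

# Compress the spectra, then build the calibration model on the PCA scores.
scores = PCA(n_components=10).fit_transform(X)
pls = PLSRegression(n_components=3).fit(scores, y)

pred = float(np.ravel(pls.predict(scores[:1]))[0])
print("predicted pertactin concentration for first batch:", round(pred, 3))
```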
39

Methods for the Expansion of Additive Manufacturing Process Space and the Development of In-Situ Process Monitoring Methodologies

Scime, Luke Robson 01 May 2018 (has links)
Metal Additive Manufacturing (AM) promises an era of highly flexible part production, replete with unprecedented levels of design freedom and inherently short supply chains. But as AM transitions from a technology primarily used for prototyping to a viable manufacturing method, many challenges must first be met before these dreams can become reality. In order for machine users to continue pushing the design envelope, process space must be expanded beyond the limits currently recommended by the machine manufacturers. Furthermore, as usable process space expands and demands for reduced operator burden and mission-critical parts increase, in-situ monitoring of the processes will become a greater necessity. Processing space includes both the parameters (e.g., laser beam power and travel velocity) and the feedstock used to build a part. The correlation between process parameters and process outcomes such as melt pool geometry, melt pool variability, and defects should be understood by machine users to allow for increased design freedom and ensure part quality. In this work, an investigation of the AlSi10Mg alloy in a Laser Powder Bed Fusion (L-PBF) process is used as a case study to address this challenge. Increasing the range (processing space) of available feedstocks beyond those vetted by the machine manufacturers has the potential to reduce costs and reassure industries sensitive to volatile global supply chains. In this work, four non-standard metal powders are successfully used to build parts in an L-PBF process. The build quality is compared to that of a standard powder (supplied by the machine manufacturer), and correlations are found between the mean powder particle diameters and as-built part quality. As user-custom parameters and feedstocks proliferate, an increased degree of process outcome variability can be expected, further increasing the need for non-destructive quality assurance and the implementation of closed-loop control schemes. This work presents two Machine Learning-based Computer Vision algorithms capable of autonomously detecting and classifying anomalies during the powder spreading stage of L-PBF processes. While initially developed to serve as the monitoring component in a feedback control system, the final algorithm is also a powerful data analytics tool, enabling the study of build failures and the effects of fusion processing parameters on powder spreading. Importantly, many troubling defects (such as porosity) in AM parts are too small to be detected by monitoring the entire powder bed; for this reason, an autonomous method for detecting changes in melt pool morphology via a high-speed camera is presented. Finally, Machine Learning techniques are applied to the in-situ melt pool morphology data to enable the study of melt pool behavior during fusion of non-bulk part geometries.
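A greatly simplified sketch in the spirit of the powder-spreading anomaly detection described above (not the algorithms developed in the thesis): hand-crafted texture features are computed for patches of the powder-bed image and fed to an off-the-shelf classifier. The feature choices, patch size, and synthetic labels are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def patch_features(patch):
    """Simple texture descriptors for one powder-bed image patch."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def make_patch(anomalous):
    """Synthetic stand-in: smooth noise, optionally with a bright streak (e.g., recoater drag)."""
    patch = rng.normal(0.5, 0.02, size=(32, 32))
    if anomalous:
        patch[:, 14:18] += 0.3  # streak across the patch
    return patch

patches = [make_patch(i % 2 == 1) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])
X = np.array([patch_features(p) for p in patches])

# Train on the first 150 patches, evaluate on the rest.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], labels[:150])
print("hold-out accuracy on synthetic patches:", clf.score(X[150:], labels[150:]))
```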
40

A Business Process Management Methodology for Care Process Monitoring

Mokahhal, Mohamed Anis January 2016 (has links)
Reporting patient states is considered an important part of care process monitoring in hospitals, used to efficiently monitor how well the health care system is performing. Monitoring care processes with enough fine-grained detail to precisely track wait states and service states, in order to reduce wait times and improve quality of care, is challenging. Business Process Management (BPM) technology is used to bring care processes online, but there is no clear methodology for integrating performance management into BPM tools in a systematic manner that is effective and minimizes complications and development costs. This thesis proposes a BPM methodology for care process monitoring that structures how performance monitoring is integrated into BPM. The major contribution of this thesis is a generic methodology for care process monitoring that describes how to structure and instrument a business process model for systematic care process monitoring, including support for the handoff points between organizations where many wait-time bottlenecks occur. It also includes a prototype implementation of a case study based on a real cardiology care process from an Ontario hospital. Our results are evaluated using three different prototypes based on this same care process. The research methodology for the thesis is based on Design-Science research.
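As a small illustration of the instrumentation idea (an assumption about how the monitoring data might be consumed, not the thesis methodology itself): if each instrumented BPM task emits timestamped state-change events, wait and service durations, including waits at handoff points between organizations, can be derived directly from the event log.

```python
from datetime import datetime

# Illustrative event log for one patient: (timestamp, task, state).
events = [
    ("2016-03-01 08:00", "referral received", "ready"),
    ("2016-03-01 09:30", "referral received", "started"),    # 1.5 h wait at the handoff
    ("2016-03-01 09:45", "referral received", "completed"),
    ("2016-03-01 09:45", "cardiology consult", "ready"),
    ("2016-03-02 10:15", "cardiology consult", "started"),   # 24.5 h wait
    ("2016-03-02 11:00", "cardiology consult", "completed"),
]

def report_durations(events):
    """Compute wait (ready->started) and service (started->completed) times per task."""
    stamps = {}
    for ts, task, state in events:
        stamps.setdefault(task, {})[state] = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    for task, s in stamps.items():
        wait = (s["started"] - s["ready"]).total_seconds() / 3600
        service = (s["completed"] - s["started"]).total_seconds() / 3600
        print(f"{task}: wait {wait:.1f} h, service {service:.2f} h")

report_durations(events)
```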
