731. Process Monitoring with Multivariate Data: Varying Sample Sizes and Linear Profiles. Kim, Keunpyo, 01 December 2003.
Multivariate control charts are used to monitor a process when more than one quality variable associated with the process is being observed. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the most commonly recommended tools for multivariate process monitoring. The standard practice, when using the MEWMA control chart, is to take samples of fixed size at regular sampling intervals for each variable. In the first part of this dissertation, MEWMA control charts based on sequential sampling schemes with two possible stages are investigated. When sequential sampling with two possible stages is used, observations at a sampling point are taken in two groups, and the number of groups actually taken is a random variable that depends on the data. The basic idea is that sampling starts with a small initial group of observations, and no additional sampling is done at this point if there is no indication of a problem with the process. But if there is some indication of a problem with the process, then an additional group of observations is taken at this sampling point. The performance of the sequential sampling (SS) MEWMA control chart is compared to the performance of standard control charts. It is shown that the SS MEWMA chart is substantially more efficient in detecting changes in the process mean vector than standard control charts that do not use sequential sampling. The situation in which different variables may have different measurement costs is also considered. MEWMA control charts with unequal sample sizes based on differing measurement costs are investigated in order to improve the performance of process monitoring. Sequential sampling plans are applied to MEWMA control charts with unequal sample sizes and compared to the standard MEWMA control charts with a fixed sample size. The steady-state average time to signal (SSATS) is computed using simulation and compared for some selected sets of sample sizes. When different variables have significantly different measurement costs, using unequal sample sizes can be more cost effective than using the same fixed sample size for each variable.
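As a concrete illustration of the two-stage sequential sampling idea, the sketch below updates a MEWMA statistic at one sampling point: a small initial group is taken, and a larger second group is added only if a tentative statistic exceeds a warning limit. The smoothing constant, group sizes, warning limit, and use of the asymptotic MEWMA covariance are illustrative assumptions, not the dissertation's design values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mewma_update(z_prev, xbar, mu0, lam=0.1):
    """One MEWMA update: Z_t = lam*(xbar - mu0) + (1 - lam)*Z_{t-1}."""
    return lam * (np.asarray(xbar) - np.asarray(mu0)) + (1 - lam) * np.asarray(z_prev)

def mewma_t2(z, sigma_xbar, lam=0.1):
    """Plotted statistic using the asymptotic covariance lam/(2-lam) * Sigma_xbar."""
    cov_z = lam / (2.0 - lam) * sigma_xbar
    return float(z @ np.linalg.solve(cov_z, z))

def sample_point_two_stage(draw, mu0, sigma, z_prev,
                           n1=3, n2=7, warning_limit=5.0, lam=0.1):
    """Take an initial group of n1 observations; if the tentative MEWMA statistic
    exceeds a warning limit, take n2 more before plotting.  Group sizes and the
    warning limit are illustrative assumptions."""
    first = np.array([draw() for _ in range(n1)])
    xbar = first.mean(axis=0)
    z = mewma_update(z_prev, xbar, mu0, lam)
    t2 = mewma_t2(z, sigma / n1, lam)
    if t2 <= warning_limit:              # no sign of trouble: stop sampling here
        return z, t2, n1
    extra = np.array([draw() for _ in range(n2)])
    xbar = np.vstack([first, extra]).mean(axis=0)
    z = mewma_update(z_prev, xbar, mu0, lam)
    t2 = mewma_t2(z, sigma / (n1 + n2), lam)
    return z, t2, n1 + n2

# Example: two quality variables, in-control mean 0 and identity covariance,
# with the process mean shifted by 0.5 in each coordinate.
mu0 = np.zeros(2)
sigma = np.eye(2)
draw = lambda: rng.multivariate_normal(mu0 + 0.5, sigma)
z, t2, n_used = sample_point_two_stage(draw, mu0, sigma, z_prev=np.zeros(2))
print(f"plotted T^2 = {t2:.2f} using {n_used} observations")
```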
In the second part of this dissertation, control chart methods are proposed for process monitoring when the quality of a process or product is characterized by a linear function. For the historical analysis of Phase I data, the recommended methods include a bivariate <i>T</i>² chart to check for stability of the regression coefficients, in conjunction with a univariate Shewhart chart to check for stability of the variation about the regression line. The use of three univariate control charts in Phase II is recommended. These three charts are used to monitor the <i>Y</i>-intercept, the slope, and the variance of the deviations about the regression line, respectively. A simulation study shows that this type of Phase II method can detect sustained shifts in the parameters better than competing methods in terms of average run length (ARL) performance. The monitoring of linear profiles is also related to the control charting of regression-adjusted variables and other methods. / Ph. D.
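A minimal sketch of the Phase II scheme described above, assuming each sampled profile is fit by least squares and the intercept, slope, and error variance are each checked against their own chart limits; the in-control line and the control limits used here are placeholder assumptions that would normally come from Phase I estimates.

```python
import numpy as np

def profile_stats(x, y):
    """Least-squares fit of one profile y = a + b*x + error.
    Returns the intercept, slope, and mean squared error of the deviations."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    mse = resid @ resid / (len(x) - 2)
    return a, b, mse

def phase2_signals(x, profiles, limits):
    """Check each profile against three univariate chart limits.
    `limits` maps 'intercept', 'slope', 'variance' to (lower, upper) pairs,
    which would normally be set from Phase I estimates (assumed here)."""
    signals = []
    for j, y in enumerate(profiles):
        a, b, mse = profile_stats(x, y)
        out = {name: not (lo <= val <= hi)
               for (name, (lo, hi)), val in zip(limits.items(), (a, b, mse))}
        if any(out.values()):
            signals.append((j, out))
    return signals

# Example with an in-control line y = 1 + 2x and illustrative limits.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
profiles = [1 + 2 * x + rng.normal(scale=0.2, size=x.size) for _ in range(20)]
limits = {"intercept": (0.7, 1.3), "slope": (1.5, 2.5), "variance": (0.0, 0.15)}
print(phase2_signals(x, profiles, limits))
```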
732. Efficient Sampling Plans for Control Charts When Monitoring an Autocorrelated Process. Zhong, Xin, 15 March 2006.
This dissertation investigates the effects of autocorrelation on the performance of various sampling plans for control charts in detecting special causes that may produce sustained or transient shifts in the process mean and/or variance. Observations from the process are modeled as a first-order autoregressive process plus a random error. Combinations of two Shewhart control charts and combinations of two exponentially weighted moving average (EWMA) control charts based on both the original observations and on the process residuals are considered. Three types of sampling plans are investigated: samples of n = 1, samples of n > 1 observations taken together at one sampling point, or samples of n > 1 observations taken at different times. In comparing these sampling plans it is assumed that the sampling rate in terms of the number of observations per unit time is fixed, so taking samples of n = 1 allows more frequent plotting. The best overall performance in detecting both sustained and transient shifts in the process is obtained by taking samples of n = 1 and using an EWMA chart combination with an observations chart for the mean and a residuals chart for the variance. The Shewhart chart combination with the best overall performance, though inferior to the EWMA chart combination, is based on samples of n > 1 taken at different times, with an observations chart for the mean and a residuals chart for the variance. / Ph. D.
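The sketch below illustrates the observation model and the preferred chart combination in simplified form: data are simulated from a first-order autoregressive process plus random error, one EWMA is applied to the raw observations (for the mean) and another to squared one-step residuals (as a stand-in for the variance chart). The residual definition ignores the measurement-error component, and all parameter values are illustrative assumptions rather than the settings studied in the dissertation.

```python
import numpy as np

def simulate_ar1_plus_error(n, phi=0.7, sd_ar=1.0, sd_err=0.5, mean_shift=0.0, rng=None):
    """Observations y_t = shift + mu_t + e_t, where mu_t is AR(1) with coefficient
    phi and e_t is independent random error (parameter values are illustrative)."""
    rng = rng or np.random.default_rng(0)
    mu = 0.0
    y = np.empty(n)
    for t in range(n):
        mu = phi * mu + rng.normal(scale=sd_ar)
        y[t] = mean_shift + mu + rng.normal(scale=sd_err)
    return y

def ewma(series, lam=0.1, start=0.0):
    """EWMA of a series: z_t = lam*x_t + (1 - lam)*z_{t-1}."""
    z = np.empty(len(series))
    prev = start
    for t, x in enumerate(series):
        prev = lam * x + (1 - lam) * prev
        z[t] = prev
    return z

y = simulate_ar1_plus_error(200, mean_shift=0.5)

# Simplified one-step residuals based on the AR(1) part only (an assumption;
# the exact predictor for an AR(1)-plus-error model is more involved).
phi = 0.7
residuals = y[1:] - phi * y[:-1]

ewma_obs = ewma(y)              # chart on the original observations (mean)
ewma_res = ewma(residuals**2)   # chart on squared residuals (variance), an assumption
print(ewma_obs[-5:], ewma_res[-5:])
```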
733. GLR Control Charts for Monitoring a Proportion. Huang, Wandi, 19 December 2011.
Generalized likelihood ratio (GLR) control charts are studied for monitoring the proportion of defective or nonconforming items in a process. The type of process change considered is an abrupt, sustained increase in the process proportion, which implies deterioration of the process quality. The objective is to effectively detect a wide range of shift sizes.
For the first part of this research, we assume samples are collected using rational subgrouping with sample size n > 1, and the binomial GLR statistic is constructed based on a moving window of past sample statistics that follow a binomial distribution. Steady-state performance is evaluated for the binomial GLR chart and the other widely used binomial charts. We find that, in terms of overall performance, the binomial GLR chart is at least as good as the other charts. In addition, since it has only two charting parameters, both of which can be easily obtained using the approach we propose, less effort is required to design the binomial GLR chart for practical applications.
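A sketch of the moving-window binomial GLR idea: for each candidate change point in the window, the proportion after the change is estimated by maximum likelihood (restricted to increases) and the log-likelihood ratio against p0 is accumulated; the chart signals when the maximum exceeds a control limit. The window length and control limit below are placeholder assumptions.

```python
import math
import random

def binomial_glr(counts, n, p0, window=50):
    """GLR statistic for detecting an increase in a binomial proportion, computed
    over a moving window of recent sample counts (window length is an assumption)."""
    recent = counts[-window:]
    best = 0.0
    for k in range(len(recent)):                     # candidate change points
        seg = recent[k:]
        p_hat = min(sum(seg) / (n * len(seg)), 1.0 - 1e-9)
        if p_hat <= p0:                              # only upward shifts matter here
            continue
        llr = sum(x * math.log(p_hat / p0) +
                  (n - x) * math.log((1.0 - p_hat) / (1.0 - p0)) for x in seg)
        best = max(best, llr)
    return best

# Example: in-control p0 = 0.01, samples of n = 100, shift to p = 0.03 at sample 101.
random.seed(0)
n, p0, limit = 100, 0.01, 6.0                        # the control limit is a placeholder
counts, signal_at = [], None
for t in range(1, 201):
    p = 0.01 if t <= 100 else 0.03
    counts.append(sum(random.random() < p for _ in range(n)))
    if binomial_glr(counts, n, p0) > limit:
        signal_at = t
        break
print("signal at sample", signal_at)
```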
The second part of this research develops a Bernoulli GLR chart to monitor processes based on continuous inspection, in which samples of size n = 1 are observed. A constant upper bound is imposed on the estimate of the process shift, preventing the corresponding Bernoulli GLR statistic from being undefined. Performance comparisons between the Bernoulli GLR chart and the other charts show that the Bernoulli GLR chart has better overall performance than its competitors, especially for detecting small shifts. / Ph. D.
734. Ammonium-Based Aeration Control with Iterative Set-Point Tuning in Wastewater Treatment Plants. Bärnheim, Tom, January 2023.
In wastewater treatment plants, the ammonium concentration is one measure used to determine the quality of the effluent wastewater. Ammonium is regarded as a hazardous chemical for aquatic ecosystems and can cause eutrophication due to its high nitrogen content. The ammonium content in the treated wastewater is controlled by aeration of the biological treatment stage, in which ammonium is converted to nitrate. The aeration process often accounts for the largest share of a wastewater treatment plant's energy consumption, which motivates automatic control solutions that can both help reduce the discharge of ammonium in the effluent and improve the energy efficiency of the aeration process. One such control technique, currently used by several large municipal wastewater treatment plants in Sweden, is ammonium-based aeration control, in which the aeration process is controlled based on measurements of the effluent ammonium concentration. The purpose of the thesis was to study an extension of ammonium-based aeration control that can better adapt to daily, and often large, fluctuations in the influent load. The proposed method uses an iterative algorithm to tune the set-point of the ammonium feedback controller. The objective is, over a given time interval, to achieve a flow-proportional mean of the effluent ammonium concentration close to a desired value for a wide range of influent loads. The method was tested through extensive simulations, and the results indicate that the iterative set-point tuning algorithm offers a superior ability to achieve a desired flow-proportional mean at the end of a given evaluation period and, in some instances, energy savings compared to standard ammonium feedback control.
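The thesis's exact algorithm is not spelled out in this abstract, so the sketch below only illustrates the general idea as stated: after each evaluation period, the flow-proportional mean of the effluent ammonium concentration is compared with the desired value and the set-point of the ammonium feedback controller is adjusted. The proportional update rule, gain, and bounds are assumptions.

```python
def flow_proportional_mean(concentrations, flows):
    """Flow-weighted mean of effluent ammonium over an evaluation period."""
    total_flow = sum(flows)
    return sum(c * q for c, q in zip(concentrations, flows)) / total_flow

def update_setpoint(setpoint, achieved_mean, target_mean, gain=0.5,
                    lo=0.1, hi=4.0):
    """Iterative set-point tuning (illustrative rule, not the thesis algorithm):
    raise the ammonium set-point when the achieved flow-proportional mean is
    below target (allowing less aeration), lower it when above."""
    new = setpoint + gain * (target_mean - achieved_mean)
    return min(max(new, lo), hi)         # keep the set-point in a plausible range

# Example over two evaluation periods with made-up measurements (mg/l, m3/h).
target = 1.0
setpoint = 1.0
periods = [
    ([0.6, 0.7, 0.9, 0.8], [800, 900, 1200, 1000]),    # low load: mean below target
    ([1.4, 1.6, 1.3, 1.5], [1500, 1700, 1600, 1400]),  # high load: mean above target
]
for conc, flow in periods:
    achieved = flow_proportional_mean(conc, flow)
    setpoint = update_setpoint(setpoint, achieved, target)
    print(f"achieved {achieved:.2f} mg/l -> new set-point {setpoint:.2f} mg/l")
```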
735. A strategy for the synthesis of real-time statistical process control within the framework of a knowledge based controller. Crowe, Edward R., January 1995.
No description available.
736. OPTIMIZATION TECHNIQUES FOR PHARMACEUTICAL MANUFACTURING AND DESIGN SPACE ANALYSIS. Daniel Joseph Laky, 21 July 2022.
In this dissertation, numerical analysis frameworks and software tools for digital design of process systems are developed, with a focus on the pharmaceutical manufacturing space. Batch processing represents the traditional and still predominant pathway to manufacture pharmaceuticals in both the drug substance and drug product spaces. Drug substance processes start with raw materials or precursors to produce an active pharmaceutical ingredient (API) through synthesis and purification. Drug product processes take this pure API in powder form, add excipients, and process the powder into consumer doses such as capsules or tablets. Continuous manufacturing has allowed many other chemical industries to take advantage of real-time process management through process control, process optimization, and real-time detection of off-spec material. The possibility of reducing total cleaning time of units and encouraging green chemistry through solvent reduction or recycling also makes continuous manufacturing an attractive alternative to batch manufacturing. However, to fully understand and take advantage of real-time process management, digital tools are required, both as soft sensors during process control and during process design and optimization. Since the shift from batch to continuous manufacturing will proceed in stages, processes will likely adopt both continuous and batch unit operations in the same process, which we will call <i>hybrid</i> pharmaceutical manufacturing routes. Even though these processes will soon become common in the industry, digital tools that address comparison of batch, hybrid, and continuous manufacturing routes in the pharmaceutical space are lacking, especially for hybrid routes. For this reason, PharmaPy, an open-source tool for pharmaceutical process development, was created to address rapid in-silico design of hybrid pharmaceutical processes.

Throughout this work, the focus is on analyzing alternative operating modes within the drug substance manufacturing context. First, the mathematical models for PharmaPy's synthesis, crystallization, and filtration units are discussed. Then, the simulation capabilities of PharmaPy are highlighted, showcasing dynamic simulation of both fully continuous and hybrid processes. The technical focus of the work as a whole, however, is primarily on optimization techniques for pharmaceutical process design. Many derivative-free optimization frameworks for simulation-optimization were constructed and utilized, with PharmaPy performing the simulations of the pharmaceutical processes. The timeline of work originally began with derivative-based methods to solve mixed-integer programs (MIPs) for water network sampling and security, as well as nonlinear programs (NLPs) and some mixed-integer nonlinear programs (MINLPs) for design space and feasibility analysis. Therefore, a method for process design that combines the ease of implementation of a process simulator (PharmaPy) with the computational performance of derivative-based optimization was implemented. Recent developments in Pyomo through the PyNumero package allow callbacks to an input-output or black-box model while using Ipopt as a derivative-based solver through the cyipopt interface. Using this approach, it was found that using a PharmaPy simulation as a black box within a derivative-based solver resulted in quicker solve times than traditional derivative-free optimization strategies, and offers a much quicker implementation strategy than a simultaneous, equation-oriented algebraic definition of the problem.

Uncertainty exists in virtually all process systems. Traditionally, uncertainty is analyzed through sampling approaches such as Monte Carlo simulation, which quickly become computational obstacles as problem scale increases. In the 1980s, chemical plant design under uncertainty through <i>flexibility analysis</i> became an option for explicitly considering model uncertainty using mathematical programming. However, such formulations provide computational obstacles of their own, as most process models produce challenging MINLPs under the flexibility analysis framework. Specifically for pharmaceutical processes, recent initiatives by the FDA have piqued interest in flexibility analysis because of the so-called <i>design space</i>: the region over which critical quality attributes (CQAs) can be guaranteed, accounting for the interactions between the inputs and process parameters. Since uncertainty is intrinsic to such operations, industry is interested in guaranteeing that CQAs hold with a set confidence level over a given operating region. In this work, the <i>probabilistic design space</i> defined by these levels of confidence is presented to demonstrate the computational advantages of a fully model-based flexibility analysis framework over a Monte Carlo sampling approach. The results show that the flexibility analysis framework decreased design space identification time by more than two orders of magnitude.

Given the implementation difficulty of new digital tools for both students and professionals, educational material was developed for PharmaPy and presented as part of a pharmaceutical API process development course at Purdue. The students were surveyed afterward; many found the framework approachable through the use of Jupyter notebooks and would consider using PharmaPy and Python for pharmaceutical modeling and data analysis in the future.

Through software development and the development of numerical analysis frameworks, digital design of pharmaceutical processes has expanded and become more approachable. The incorporation of rigorous simulation under process uncertainty promotes the use of digital tools in regulatory filings and reduces unnecessary process development costs through model-based design. Examples of these improvements are evident in the development of PharmaPy, a simulation-optimization framework using PharmaPy, and flexibility analysis tools. These tools resulted in a computational benefit of one to two orders of magnitude compared to methods used in practice and, in some cases, reduce the modeling time required to determine optimal operating conditions or the design space of a pharmaceutical manufacturing process.
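For context on the Monte Carlo baseline mentioned above, the sketch below estimates a probabilistic design space by brute-force sampling: each candidate operating point is simulated many times under parameter uncertainty, and the point is kept if the critical quality attribute holds with the required confidence. The toy conversion model, CQA, uncertainty model, and confidence level are placeholder assumptions standing in for a PharmaPy flowsheet.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_process_model(temperature, residence_time, k):
    """Placeholder first-order conversion model X = 1 - exp(-k(T) * tau).
    Stands in for a full flowsheet simulation."""
    k_eff = k * np.exp(0.03 * (temperature - 300.0))
    return 1.0 - np.exp(-k_eff * residence_time)

def probabilistic_design_space(temps, taus, cqa_min=0.95, confidence=0.9,
                               n_samples=500):
    """Grid of operating points; a point is inside the probabilistic design space
    if the CQA (conversion >= cqa_min) holds with the stated confidence under
    uncertainty in the rate constant k (uncertainty model assumed)."""
    inside = np.zeros((len(temps), len(taus)), dtype=bool)
    for i, temp in enumerate(temps):
        for j, tau in enumerate(taus):
            k_samples = rng.lognormal(mean=np.log(0.05), sigma=0.2, size=n_samples)
            conversions = toy_process_model(temp, tau, k_samples)
            inside[i, j] = np.mean(conversions >= cqa_min) >= confidence
    return inside

temps = np.linspace(290.0, 330.0, 5)     # K
taus = np.linspace(20.0, 120.0, 6)       # min
print(probabilistic_design_space(temps, taus).astype(int))
```

The flexibility-analysis formulation developed in the dissertation replaces this sampling loop with a mathematical program, which is the source of the reported reduction in design space identification time of more than two orders of magnitude.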
737. Process Intensification of Chemical Systems Towards a Sustainable Future. Zewei Chen, 27 July 2022.
Cutting greenhouse gas emissions to as close to zero as possible, or "net-zero", may be the biggest sustainability goal to be achieved in the next 30 years. While chemical engineering evolved against the backdrop of an abundant supply of fossil resources for chemical production and energy, renewable energy resources such as solar and wind will find more usage in the future. This thesis work develops new concepts, methods, and algorithms to identify and synthesize process schemes that address multiple aspects of sustainable chemical and energy systems. Shale gas can serve as both an energy resource and a chemical feedstock for the transition period towards a sustainable economy, and has the potential to be a carbon source for the long term. The past two decades have seen increasing natural gas flaring and venting due to the lack of transformation or transportation infrastructure in emerging shale gas producing regions. To reduce carbon emissions and the wastage of shale resources, an innovative process hierarchy is identified for the valorization of natural gas liquids from shale gas at medium to small scale near the wellhead. This paradigm shift fundamentally changes the sequencing of various separation and reaction steps and results in dramatically simplified and intensified process flowsheets. The resulting processes could achieve over 20% lower capital cost with a higher recovery of products. Historically, heat energy has been supplied to chemical plants by burning fossil resources. In the future, however, with the emphasis on greenhouse gas reduction, renewable energy resources will find more usage. Renewable electricity from photovoltaics and wind has now become competitive with electricity from fossil resources. Therefore, a major challenge for chemical engineering processes is how to use renewable electricity efficiently within a chemical plant and eliminate any carbon dioxide release from chemical plants. We introduce several decarbonization flowsheets for a process that first converts natural gas liquids (NGLs) mainly to ethylene in an energy-intensive dehydrogenation reactor and subsequently converts the ethylene into value-added, easy-to-transport liquid fuels.
Molecular separations are needed across many types of industries, including the oil and gas, food, pharmaceutical, and chemical industries. In a chemical plant, 40%–60% of energy and capital cost is tied to separation processes. For membrane-based processes to be widely used at industrial scale to obtain high-recovery, high-purity products from gaseous and liquid mixtures, models that allow membrane cascades to be operated at their optimal modes are needed for the design of sustainable separation systems. Such models also enable proper comparison of membrane performance vis-a-vis other competing separation technologies. However, a model of this kind for multicomponent fluid separation has been missing from the literature. We have developed an MINLP global optimization algorithm that guarantees identification of the minimum power consumption of multicomponent membrane cascades. The proposed optimization algorithm is implemented in GAMS and is demonstrated to solve cascades with up to 4 components and 5 stages via the BARON solver, yielding configurations significantly more advantageous than state-of-the-art processes. The model is currently being extended to optimize total cost, including capital. Such a model holds promise for the development and implementation of energy-efficient separation plants with the least carbon footprint. This thesis work also addresses important separation topics including dividing wall columns and water desalination.
738. The road towards the flawless residence: A case study in process- and quality management. Andersson, Conny; Björk, David, January 2021.
This study was initiated to examine whether the processes of the construction company JM AB are followed and how they can be streamlined, whether the targeted goals are reached, and when and why prescribed routines are abandoned. The study was carried out through online qualitative interviews with supervisors at nine examined construction projects and with officials within the JM AB organization. The correspondence with employees emphasized the importance of communication, experience feedback, and quality and process control. Continuous improvement is central at JM AB, and the study shows that proactive work against errors and flaws in production and warranty assurance can be further strengthened through enhanced communication and experience feedback. The results suggest that resource and cost inefficiencies can be reduced through preventive efforts, emphasis on comprehensibility in the internal quality goals, and a thorough consequence analysis in case of late administrative changes in the construction process. Furthermore, the results show the importance of clear communication towards the end customer, as well as the value of proactively declaring expected deviations; this will lead towards a higher CSI (Customer Service Index). The study is limited to processes and methods used within JM AB and is primarily focused on the production phase and warranty assurance. The study resulted in several suggestions for improvement related to customer communication, centralized internal quality controls, and more efficient internal communication and experience feedback. The results also stress the importance of clearly communicating JM's desired quality level to subcontractors and during procurement.
739. Architectural Enhancements to Increase Trust in Cyber-Physical Systems Containing Untrusted Software and Hardware. Farag, Mohammed Morsy Naeem, 25 October 2012.
Embedded electronics are widely employed in cyber-physical systems (CPSes), which tightly integrate and coordinate computational and physical elements. CPSes are extensively deployed in security-critical applications and nationwide infrastructure. Perimeter security approaches to preventing malware infiltration of CPSes are challenged by the complexity of modern embedded systems incorporating numerous heterogeneous and updatable components. Global supply chains and third-party hardware components, tools, and software limit the reach of design verification techniques and introduce security concerns about deliberate Trojan inclusions. As a consequence, skilled attacks against CPSes have demonstrated that these systems can be surreptitiously compromised. Existing run-time security approaches are not adequate to counter such threats because of either the impact on performance and cost, lack of scalability and generality, trust needed in global third parties, or significant changes required to the design flow.
We present a protection scheme called Run-time Enhancement of Trusted Computing (RETC) to enhance trust in CPSes containing untrusted software and hardware. RETC is complementary to design-time verification approaches and serves as a last line of defense against the rising number of inexorable threats against CPSes. We target systems built using reconfigurable hardware to meet the flexibility and high-performance requirements of modern security protections. Security policies are derived from the system's physical characteristics and component operational specifications and translated into synthesizable hardware integrated into specific interfaces on a per-module or per-function basis. The policy-based approach addresses many security challenges by decoupling policies from system-specific implementations and optimizations, and minimizes the changes required to the design flow. Interface guards enable in-line monitoring and enforcement of critical system computations at run-time. Trust is only required in a small set of simple, self-contained, and verifiable guard components. Hardware trust anchors simultaneously address the performance, flexibility, developer productivity, and security requirements of contemporary CPSes.
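Purely as a software analogue of the hardware interface guards described above (the actual guards are synthesized onto reconfigurable fabric), the sketch below enforces a simple policy, derived from assumed physical limits, on a control command crossing a module boundary: compliant commands pass through, while violations raise an alert and fall back to the last known-good value. The specific limits and fallback behavior are assumptions.

```python
from dataclasses import dataclass

@dataclass
class GuardPolicy:
    """Policy limits derived from the plant's physical characteristics (assumed values)."""
    min_output: float = 0.0      # actuator command lower bound
    max_output: float = 100.0    # actuator command upper bound
    max_rate: float = 5.0        # largest allowed change per control step
    fallback: float = 50.0       # safe value to start from

class InterfaceGuard:
    """In-line monitor on the interface between an untrusted controller module
    and the actuator: passes compliant commands through, substitutes the last
    known-good value and counts an alert otherwise."""
    def __init__(self, policy: GuardPolicy):
        self.policy = policy
        self.last_value = policy.fallback
        self.alerts = 0

    def check(self, command: float) -> float:
        p = self.policy
        in_range = p.min_output <= command <= p.max_output
        rate_ok = abs(command - self.last_value) <= p.max_rate
        if in_range and rate_ok:
            self.last_value = command
            return command
        self.alerts += 1                 # would assert an alarm signal in hardware
        return self.last_value           # hold the last known-good command

guard = InterfaceGuard(GuardPolicy())
for cmd in [50.0, 52.0, 250.0, 53.0]:    # 250.0 violates the range policy
    print(guard.check(cmd), end=" ")
print("| alerts:", guard.alerts)
```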
We apply RETC to several CPSes having common security challenges, including secure reconfiguration control in reconfigurable cognitive radio platforms, tolerating hardware Trojan threats in third-party IP cores, and preserving stability in process control systems. High-level architectures demonstrated with prototypes are presented for the selected applications. Implementation results illustrate the efficiency of RETC in terms of the performance and overheads of the hardware trust anchors. Testbenches associated with the addressed threat models are generated and experimentally validated on a reconfigurable platform to establish the protection scheme's efficacy in thwarting the selected threats. This new approach significantly enhances trust in CPSes containing untrusted components without sacrificing cost and performance. / Ph. D.
740. Surveillance of Negative Binomial and Bernoulli Processes. Szarka, John Louis III, 03 May 2011.
The evaluation of discrete processes is considered for industrial and healthcare applications. Count data may be used to measure the number of defective items in industrial applications or the incidence of a certain disease at a health facility. Another type of discrete random variable arises with binary data, where an item can be classified as conforming or nonconforming in a manufacturing context, or a patient can be classified by disease status in health-related applications.
The first phase of this research uses discrete count data modeled with the Poisson and negative binomial distributions in a healthcare setting. Syndromic counts are currently monitored by the BioSense program within the Centers for Disease Control and Prevention (CDC) to provide real-time biosurveillance. The Early Aberration Reporting System (EARS) compares recent baseline information with the current day's syndromic count to determine whether an outbreak may be present. An adaptive threshold method is proposed based on fitting baseline data to a parametric distribution and then calculating an upper-tailed p-value. These statistics are then converted to approximately standard normal random variables. Monitoring is examined for independent and identically distributed data as well as for data following several seasonal patterns. An exponentially weighted moving average (EWMA) chart is also used with these methods. The effectiveness of these methods in detecting simulated outbreaks is evaluated in several sensitivity analyses.
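A sketch of the adaptive threshold method as described: recent baseline counts are fit to a parametric distribution (negative binomial by the method of moments, falling back to Poisson), the current day's count is converted to an upper-tailed p-value, and the p-value is transformed to an approximately standard normal statistic that can also feed an EWMA chart. The baseline length, EWMA constant, and example counts are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def adaptive_threshold_z(baseline, todays_count):
    """Fit the baseline counts by the method of moments and return an
    approximately standard normal statistic for today's count."""
    m, v = np.mean(baseline), np.var(baseline, ddof=1)
    if v > m:                          # overdispersed: negative binomial fit
        p = m / v
        r = m * p / (1 - p)
        pval = stats.nbinom.sf(todays_count - 1, r, p)   # upper tail P(X >= count)
    else:                              # otherwise fall back to Poisson
        pval = stats.poisson.sf(todays_count - 1, m)
    pval = min(max(pval, 1e-12), 1 - 1e-12)
    return stats.norm.ppf(1 - pval)    # large counts give large positive Z

def ewma(zs, lam=0.2):
    out, prev = [], 0.0
    for z in zs:
        prev = lam * z + (1 - lam) * prev
        out.append(prev)
    return out

# Example: a 28-day baseline, then a few days with an injected outbreak.
rng = np.random.default_rng(2)
baseline = rng.negative_binomial(5, 0.5, size=28)        # mean 5, overdispersed
current = [6, 7, 12, 15, 14]
zs = [adaptive_threshold_z(baseline, c) for c in current]
print(np.round(zs, 2), np.round(ewma(zs), 2))
```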
The second phase of research explored in this dissertation considers information that can be classified as a binary event. In industry, it is desirable for the probability of a nonconforming item, p, to be extremely small. Traditional Shewhart charts, such as the p-chart, are not reliable for monitoring this type of process. A comprehensive literature review of control chart procedures for this type of process is given. The equivalence between two cumulative sum (CUSUM) charts, based on geometric and Bernoulli random variables, is explored. An evaluation of the unit and group-runs (UGR) chart is performed, where it is shown that the in-control behavior of this chart is quite misleading, so the chart should not be recommended to practitioners. / Ph. D.
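A sketch of a one-sided Bernoulli CUSUM of the kind referenced above: each inspected item contributes the log-likelihood ratio for a shift from the in-control proportion p0 to a design value p1, negative excursions are truncated at zero, and the chart signals when the statistic crosses a control limit; p0, p1, and the limit are illustrative assumptions.

```python
import math
import random

def bernoulli_cusum(observations, p0, p1, h):
    """One-sided Bernoulli CUSUM for an increase from p0 to p1.
    Returns the index of the first signal, or None.  The control limit h is an
    illustrative value; in practice it is chosen to give a desired in-control
    average number of observations to signal."""
    llr1 = math.log(p1 / p0)                 # contribution of a nonconforming item
    llr0 = math.log((1 - p1) / (1 - p0))     # contribution of a conforming item
    s = 0.0
    for i, x in enumerate(observations):
        s = max(0.0, s + (llr1 if x else llr0))
        if s > h:
            return i
    return None

# Example: p0 = 0.001; the process shifts to p = 0.005 after 5000 items.
random.seed(3)
obs = [random.random() < (0.001 if i < 5000 else 0.005) for i in range(50000)]
print("first signal at item", bernoulli_cusum(obs, p0=0.001, p1=0.005, h=4.0))
```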