161 |
Simultaneous Design and Control of Chemical Plants: A Robust Modelling Approach. Ricardez Sandoval, Luis Alberto. January 2008.
This research work presents a new methodology for the simultaneous design and control of chemical processes. One of the most computationally demanding tasks in the integration of process control and process design is the search for worst case scenarios that result in maximal output variability or in process variables being at their constraint limits. The key idea in the current work is to find these worst scenarios by using tools borrowed from robust control theory. To apply these tools, the closed-loop dynamic behaviour of the process to be designed is represented as a robust model. Accordingly, the process is mathematically described by a nominal linear model with uncertain model parameters that vary within identified ranges of values. These robust models, obtained from closed-loop identification, are used in the present method to test the robust stability of the process and to estimate bounds on the worst deviations in process variables in response to external disturbances.
The first approach proposed to integrate process design and process control made use of robust tools based on the Quadratic Lyapunov Function (QLF). These tests require the identification of an uncertain state-space model that is used to evaluate the asymptotic stability of the process and to estimate a bound (γ) on the root-mean-square (RMS) gain of the model output variability. This bound is used to assess the worst-case process variability and to evaluate bounds on the deviations in process variables that are to be kept within constraints. These robustness tests are then embedded within an optimization problem that seeks the optimal design and controller tuning parameters minimizing a user-specified cost function. Since the value of γ is a bound on one standard deviation of the model output variability, larger multiples of this value, e.g. 2γ and 3γ, were used to provide more realistic bounds on the worst deviations in process variables. This γ-based methodology was applied to the simultaneous design and control of a mixing tank process. Although the approach resulted in conservative designs, it posed a nonlinear constrained optimization problem that required less computational effort than a Dynamic Programming approach, which had been the main method previously reported in the literature.
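To make the structure of the γ-based formulation concrete, the sketch below embeds a closed-loop variability bound inside a design/tuning optimization for a level-controlled mixing tank. It is a minimal illustration only: the thesis obtains γ from QLF-based tests on an uncertain state-space model, whereas here γ is approximated by the peak frequency-response gain of a nominal first-order closed-loop model, and all cost coefficients, disturbance sizes, and variable names are assumptions.

```python
# Hedged sketch: a gamma-style variability bound embedded in a design/tuning optimization.
import numpy as np
from scipy.optimize import minimize

w = np.logspace(-3, 2, 400)              # frequency grid, rad/min (assumed)

def gamma_bound(A, Kc, tauI):
    """Peak closed-loop gain from inlet-flow disturbance to tank level.

    Level dynamics: A*dh/dt = d - q_out, with PI control q_out = Kc*(h + (1/tauI)*int(h)),
    giving the disturbance transfer function H/D = 1/(A*s + Kc + Kc/(tauI*s))."""
    s = 1j * w
    Gd = 1.0 / (A * s + Kc + Kc / (tauI * s))
    return np.max(np.abs(Gd))            # stand-in for the QLF-based gamma of the thesis

d_max, h_max = 0.5, 0.2                  # assumed disturbance size and level-deviation limit

def cost(x):
    A, Kc, tauI = x
    capital = 50.0 * A                                        # hypothetical capital-cost term
    variability = 100.0 * gamma_bound(A, Kc, tauI) * d_max    # hypothetical variability penalty
    return capital + variability

def deviation_within_limit(x):
    # keep 2*gamma*d_max (a multiple of gamma, as described above) inside the allowed deviation
    A, Kc, tauI = x
    return h_max - 2.0 * gamma_bound(A, Kc, tauI) * d_max

res = minimize(cost, x0=[2.0, 5.0, 5.0],
               bounds=[(0.5, 10.0), (0.1, 50.0), (0.1, 50.0)],
               constraints=[{"type": "ineq", "fun": deviation_within_limit}])
print("design/tuning:", res.x, "cost:", res.fun)
```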
While the γ-based robust performance criterion provides a root-mean-square measure of the variability, it does not provide information on the worst possible deviation. In order to search for the worst deviation, the present work proposed a new robust variability measure based on Structured Singular Value (SSV) analysis, also known as μ-analysis. The calculation of this measure also returns the critical time-dependent disturbance profile that generates the maximum model output error. This robust measure is based on robust finite impulse response (FIR) closed-loop models that are identified directly from simulations of the full nonlinear dynamic model of the process. As in the γ-based approach, the simultaneous design and control of the mixing tank problem was considered using this new μ-based methodology, and comparisons between the γ-based and μ-based strategies were discussed. The computational time required to assess the worst-case process variability by the proposed μ-based method was also compared to that required by a Dynamic Programming approach, and the computational burden expected when estimating the worst-case variability of large-scale processes was assessed. The results show that this new robust variability tool is computationally efficient and can potentially be implemented to achieve the simultaneous design and control of chemical plants.
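As a rough illustration of why closed-loop FIR models make the worst-case search tractable, the snippet below computes, for a nominal FIR model driven by a magnitude-bounded disturbance, the largest output deviation and the disturbance profile that attains it. The thesis computes this bound with SSV (μ) analysis over an uncertain FIR family identified from nonlinear closed-loop simulations; the nominal calculation and the FIR coefficients here are assumptions for illustration only.

```python
import numpy as np

def worst_case_fir(h, d_max):
    """Worst-case output deviation for y(k) = sum_i h[i]*d(k-i) with |d| <= d_max,
    and the critical disturbance profile (in time order) that attains it."""
    h = np.asarray(h, dtype=float)
    y_worst = d_max * np.abs(h).sum()          # tight bound for a magnitude-bounded disturbance
    d_critical = d_max * np.sign(h[::-1])      # sign-matched, time-reversed coefficient pattern
    return y_worst, d_critical

# hypothetical closed-loop FIR coefficients identified from disturbance simulations
h = [0.00, 0.08, 0.15, 0.12, 0.07, 0.03, 0.01]
y_max, d_crit = worst_case_fir(h, d_max=0.5)
print(f"worst-case deviation = {y_max:.3f}")
print("critical disturbance profile:", d_crit)
```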
Finally, the Structured Singular Value-based (μ-based) methodology was used to perform the simultaneous design and control of the Tennessee Eastman (TE) process. Although this chemical process has been widely studied in the Process Systems Engineering (PSE) area, the integration of design and control of this process has not been previously studied. The problem is challenging since it is open-loop unstable and exhibits a highly nonlinear dynamic behaviour. To assess the contributions of different sections of the TE plant to the overall costs, two optimization scenarios were considered. The first scenario considered only the reactor’s section of the TE process whereas the second scenario analyzed the complete TE plant.
To study the interactions between design and control in the reactor section of the plant, the effect of different parameters on the resulting design and control schemes was analyzed. For this scenario, an alternative calculation of the variability was considered, whereby the variability was obtained from numerical simulations of the worst disturbance instead of from the analytical μ-based bound. Comparisons between the analytical-bound-based and simulation-based strategies were discussed. Additionally, the computational effort required by the present solution strategy was compared with that required by a Dynamic Programming-based approach.
Subsequently, the topic of parameter uncertainty was investigated. Specifically, uncertainty in the reaction rate coefficient was considered in the analysis of the TE problem. Accordingly, the optimization problem was expanded to account for a set of different values of the reaction rate constant. Due to the complexity associated with the second scenario, the effect of uncertainty in the reaction constant was only studied for the first scenario corresponding to the optimization of the reactor section.
The results obtained from this research project show that Dynamic Programming requires a CPU time that is almost two orders of magnitude larger than that required by the methodology proposed here. Likewise, the consideration of uncertainty in a physical parameter within the analysis, such as the reaction rate constant in the Tennessee Eastman problem, was shown to dramatically increase the computational load when compared to the case in which there is no process parametric uncertainty in the analysis.
In general, the integration of design and control within the analysis resulted in a plant that is more economically attractive than that specified by solely optimizing the controllers but leaving the design of the different units fixed. This result is particularly relevant for this research work since it justifies the need for conducting simultaneous process design and control of chemical processes. Although the application of the robust tools resulted in conservative designs, the method has been shown to be an efficient computational tool for simultaneous design and control of chemical plants.
|
163 |
Treatment of the Wastewater containing EDTA and Heavy Metals by Ferrite Process combined with Fenton's Method. Teng, Wan-yu. 01 July 2004.
Abstract
Heavy metals and organic compounds play an important role in pollution control. In Taiwan, large amounts of toxic wastewater are produced by the electroplating, metal surface-treatment, steel, IC, electronics, optoelectronics, printed circuit board, refinery, pharmaceutical, paint, and food manufacturing industries. These wastewaters contain materials that are toxic and hazardous to human health and to environmental quality. There is therefore an immediate need to develop innovative processes for treating wastewater containing heavy metals and organic compounds.
This study first uses the strong oxidation of Fenton's process to remove the organic pollutant EDTA, and then uses the ferrite process to incorporate heavy metal ions into a spinel structure so that they can be removed. Through this work, the optimal operating mode of the series "Fenton/Ferrite" treatment process is established.
For the batch Fenton reaction, the emphasis of this work is on the effect of pH, ferrous ion concentration, and hydrogen peroxide dosage on EDTA removal. The results show that the best EDTA removal occurs under acidic conditions (pH = 2), and that removal increases as the ferrous ion and hydrogen peroxide doses are increased appropriately; once their dosages exceed a certain value, however, EDTA removal decreases. This result may be caused by the excess ferrous ions and hydrogen peroxide, which can inhibit the generation of hydroxyl radicals.
Following the Fenton process, the ferrite process is used to treat the wastewater in series. The ferrite process has three stages, with temperature and pH as the controlled operating conditions: 70 °C and pH 9.0 in the first stage, 90 °C and pH 9.0 in the second stage, and 80 °C and pH 10.0 in the last stage.
The results of the series experiments show that, under experimental condition A-4, every heavy-metal concentration in the supernatant meets the effluent discharge standards at a total reaction time of 90 minutes; if Hg ions are excluded from the wastewater, the reaction time can be reduced to 50 minutes, which is advantageous because of the shorter treatment time. Under experimental condition A-3, with Cd and Hg ions excluded from the wastewater, a reaction time of 56 minutes also brings every heavy-metal concentration within the discharge standards, and this condition requires the least ferrous ion dosage of all. Thus, this condition offers economic benefits in both treatment time and cost-effectiveness.
Keywords: Fenton's process, ferrite process, EDTA, heavy metals
|
164 |
A Study on Treating Heavy Metal in Laboratory Waste Liquid by Ferrite Process. Chuang, Chien-kuei. 08 August 2002.
Abstract
Keywords: ferrite process, extended-type ferrite process, elutriation
At present, laboratory waste liquid is usually collected and classified and then sent to privately operated treatment plants. Because the liquid is collected over long periods and comes from varied sources, laboratory waste not only shows a large variation in composition, but its true constitution is also hardly known. Achieving proper treatment is therefore genuinely difficult.
In this work, the ferrite process (FP) was used to develop a method that can completely treat laboratory waste liquid containing heavy metals in solution. The synthetic waste liquid contained ten common heavy-metal ions (Cd, Cu, Pb, Cr, Zn, Ag, Hg, Ni, Sn, and Mn), each at a concentration of 0.002 M, for a total heavy-metal concentration of 0.02 M. The treatment performance of the FP was judged by requiring that the concentrations of all heavy metals in the filtered solution and in the sediment sludge be below the effluent standards and the TCLP standards, respectively.
It was found that the conventional FP could not meet this performance goal. As a result, an extended-reaction FP was developed to improve on the conventional FP. The underlying idea is to maintain a sufficient ferrous ion concentration so that the beneficial reaction continues. The extended FP did indeed improve the sludge quality enough to meet the TCLP standards, and of the two dosing modes tested, pulse addition performed better than continuous addition; however, the extended reaction raised the operating cost. A washing (elutriation) step was therefore designed to reduce the cost and to further assure the sludge quality of the extended FP.
Based on the achievements of this study, combined with commercial ion-exchange technology, a complete flowsheet is recommended for users or plant owners to design the treatment plant.
|
165 |
Novel visualization and algebraic techniques for sustainable development through property integration. Kazantzi, Vasiliki. 25 April 2007.
The process industries are characterized by the significant consumption of fresh
resources. This is a critical issue, which calls for an effective strategy towards more
sustainable operations. One approach that favors sustainability and resource
conservation is material recycle and/or reuse. In this regard, an integrated framework is
an essential element in sustainable development. An effective reuse strategy must
consider the process as a whole and develop plant-wide strategies. While the role of
mass and energy integration has been acknowledged as a holistic basis for sustainable
design, it is worth noting that there are many design problems that are driven by
properties or functionalities of the streams and not by their chemical constituents. In this
dissertation, the notion of componentless design, which was introduced by Shelley and
El-Halwagi in 2000, was employed to identify optimal strategies for resource
conservation, material substitution, and overall process integration.
First, the focus was placed on the problem of identifying rigorous targets for material
reuse in property-based applications by introducing a new property-based pinch analysis
and visualization technique. Next, a non-iterative, property-based algebraic technique,
which aims at determining rigorous targets of the process performance in material-recycle
networks, was developed. Further, a new property-based procedure for
determining optimal process modifications on a property cluster diagram to optimize the
allocation of process resources and minimize waste discharge was also discussed. In
addition, material substitution strategies were considered for optimizing both the process
and the fresh properties. In this direction, a new process design and molecular synthesis methodology was developed by using the componentless property-cluster domain and
Group Contribution Methods (GCM) as key tools in developing a generic framework
and systematic approach to the problem of simultaneous process and molecular design.
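As a hedged sketch of the componentless (property-cluster) mapping referred to above, the snippet below converts raw stream properties to dimensionless property clusters via mixing operators normalized by reference values, following the general form introduced by Shelley and El-Halwagi (2000). The specific operators, reference values, and stream data are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Illustrative property operators (forms commonly used in the property-integration
# literature): density mixes through 1/rho, sulfur content linearly, and Reid vapor
# pressure through a 1.44 power. Reference values are arbitrary normalizers.
operators = {
    "density": lambda rho: 1.0 / rho,
    "sulfur":  lambda s: s,
    "rvp":     lambda rvp: rvp ** 1.44,
}
references = {"density": 1.0 / 800.0, "sulfur": 1.0, "rvp": 4.0 ** 1.44}

def property_clusters(props):
    """Map one stream's raw properties to dimensionless clusters on the ternary diagram."""
    omega = np.array([operators[k](props[k]) / references[k] for k in operators])
    aup = omega.sum()                    # augmented property index of the stream
    return omega / aup, aup              # clusters sum to 1 by construction

# hypothetical process stream: 750 kg/m3, 0.8 wt% sulfur, RVP of 3.2 atm
clusters, aup = property_clusters({"density": 750.0, "sulfur": 0.8, "rvp": 3.2})
print("clusters:", clusters, "AUP:", aup)
```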
|
166 |
Resilient engineered systems: the development of an inherent system property. Mitchell, Susan McAlpin. 17 September 2007.
Protecting modern engineered systems has become increasingly difficult due to their complexity and the difficulty of predicting potential failures. With the added threat of terrorism, the desire to design systems resilient to potential faults has increased. The concept of a resilient system, one that can withstand unanticipated failures without disastrous consequences, provides promise for designing safer systems. Resilience has been recognized in research settings as a desired end product of specific systems, but resilience as a general, inherent, measurable property of systems had yet to be established. To achieve this goal, system resilience was related to an established concept, the resiliency of a material. System resilience was defined as the amount of energy a system can store before reaching a point of instability. The energy input into each system, as well as the system's exergy, was used to develop system stress and system strain variables. Process variable changes were applied to four test systems (a steam pipe, a water pipe, a water pump, and a heat exchanger) to obtain series of system stress and system strain data that were then graphed to form characteristic system response curves. Resilience was quantified by performing power-law regression on each curve to determine the variable ranges where the regression line accurately described the data and where the data began to deviate from that power-law trend. The four test systems were then analyzed in depth by combining them into an overall system using the process simulator ASPEN, and the ranges predicted from the overall system data were compared to the ranges predicted for the individual equipment. Finally, future work opportunities were outlined to show potential areas for expansion of the methodology.
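The short sketch below illustrates the regression step described above: fitting a power law to system stress/strain data in log-log space and flagging where the data depart from the fitted trend. The synthetic data, the low-stress reference window used for the fit, the 10% deviation tolerance, and the function names are all illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def resilient_range(stress, strain, n_ref=8, rel_tol=0.10):
    """Fit strain ~ a*stress**b on a low-stress reference window (assumed on-trend),
    then return the fit and the largest stress still within rel_tol of that trend."""
    b, log_a = np.polyfit(np.log(stress[:n_ref]), np.log(strain[:n_ref]), 1)
    predicted = np.exp(log_a) * stress ** b
    rel_err = np.abs(strain - predicted) / predicted
    off_trend = np.flatnonzero(rel_err > rel_tol)
    last_on_trend = off_trend[0] - 1 if off_trend.size else len(stress) - 1
    return (np.exp(log_a), b), stress[last_on_trend]

# synthetic stress/strain data that follow a power law and then depart from it
stress = np.linspace(1.0, 10.0, 20)
strain = 0.5 * stress ** 1.3
strain[14:] *= 1.0 + 0.08 * np.arange(1, 7)      # imposed departure from the trend
params, stress_limit = resilient_range(stress, strain)
print("fit (a, b):", params, "last stress on the power-law trend:", stress_limit)
```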
|
167 |
Multivariate statistical monitoring and fault diagnosis of dynamic batch processes with two-time-dimensional strategy. Yao, Yuan. January 2009.
Includes bibliographical references (p. 193-208).
|
168 |
Integration of Scheduling and Dynamic Optimization: Computational Strategies and Industrial Applications. Nie, Yisu. 01 July 2014.
This thesis study focuses on the development of model-based optimization strategies for the integration of process scheduling and dynamic optimization, and on applications of the integrated approaches to industrial polymerization processes. The integrated decision-making approaches seek to exploit the synergy between production schedule design and process unit control to improve process performance. The integration problem has received much attention from both academia and industry over the past decade. For scheduling, we adopt two formulation approaches based on the state equipment network and the resource task network, respectively. For dynamic optimization, we rely on the simultaneous collocation strategy to discretize the differential-algebraic equations. Two integrated formulations are proposed that result in mixed discrete/dynamic models, and solution methods based on decomposition approaches are addressed. A class of ring-opening polymerization processes is used for our industrial case studies. We develop rigorous dynamic reactor models for both semi-batch homopolymerization and copolymerization operations. The reactor models are based on first principles such as mass and heat balances, reaction kinetics, and vapor-liquid equilibria. We derive reactor models with both the population balance method and the method of moments. The obtained reactor models are validated using historical plant data. Polymerization recipes are optimized with dynamic optimization algorithms to reduce polymerization times by modifying operating conditions such as the reactor temperature and monomer feed rates over time. Next, we study scheduling methods that involve multiple process units and products. The resource task network scheduling model is reformulated into a state-space form that offers a good platform for incorporating dynamic models. Lastly, for the integration study, we investigate a process with two parallel polymerization reactors and downstream storage and purification units. The dynamic behaviors of the two reactors are coupled through shared cooling resources. We formulate the integration problem by combining the state-space resource task network model with the moment reactor model. The case study results indicate promising improvements in process performance from applying dynamic optimization and scheduling optimization separately and, more importantly, from the integration of the two.
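To illustrate the kind of moment-based reactor model mentioned above, the sketch below integrates a deliberately simplified method-of-moments model of a semi-batch, living ring-opening polymerization (a monomer balance plus the zeroth, first, and second chain moments). The kinetic constants, feed policy, isothermal assumption, and omission of vapor-liquid equilibrium are all simplifying assumptions; the thesis models are full first-principles models with mass and heat balances.

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, feed_rate, t_feed = 50.0, 0.05, 4.0      # assumed L/(mol h), mol/(L h), h

def moments(t, y):
    M, lam0, lam1, lam2 = y
    rp = kp * M * lam0                        # propagation rate
    f = feed_rate if t <= t_feed else 0.0     # simple semi-batch monomer feed policy
    return [f - rp,                           # monomer balance
            0.0,                              # living chains conserved (no termination)
            rp,                               # first moment grows with propagation
            kp * M * (lam0 + 2.0 * lam1)]     # second moment of the chain-length distribution

y0 = [0.2, 0.005, 0.005, 0.005]               # initial monomer and initiator-derived moments
sol = solve_ivp(moments, (0.0, 8.0), y0, dense_output=True)
M, lam0, lam1, lam2 = sol.y[:, -1]
print("DPn =", lam1 / lam0, "PDI =", lam2 * lam0 / lam1**2)
```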
|
169 |
Estimation and Testing of the Jump Component in Levy Processes. Ren, Zhaoxia. January 2013.
In this thesis, a new method based on characteristic functions is proposed to estimate the jump component in a finite-activity Levy process, which includes the jump frequency and the jump size distribution. Properties of the estimators are investigated, which show that this method does not require high frequency data. The implementation of the method is discussed, and examples are provided. We also perform a comparison which shows that our method has advantages over an existing threshold method. Finally, two applications are included: one is the classification of the increments of the model, and the other is the testing for a change of jump frequency.
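As a rough, hedged illustration of characteristic-function-based estimation for a finite-activity Levy model, the snippet below matches the empirical characteristic function of simulated increments (Brownian motion plus compound Poisson jumps with Gaussian sizes) to the model characteristic function by least squares, recovering the diffusion and jump parameters. This is one simple variant shown for intuition only; it is not necessarily the estimator or the testing procedure developed in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dt, n = 0.1, 20_000
sigma_true, lam_true, eta_true = 0.3, 2.0, 0.5
jumps = rng.poisson(lam_true * dt, n)                         # jump counts per increment
x = (sigma_true * np.sqrt(dt) * rng.standard_normal(n)
     + rng.standard_normal(n) * eta_true * np.sqrt(jumps))    # sum of Gaussian jump sizes

u = np.linspace(0.2, 8.0, 40)                                 # frequency grid
ecf = np.array([np.cos(ui * x).mean() for ui in u])           # empirical CF (real part)

def model_cf(theta):
    sigma, lam, eta = theta
    return np.exp(dt * (-0.5 * sigma**2 * u**2
                        + lam * (np.exp(-0.5 * eta**2 * u**2) - 1.0)))

fit = least_squares(lambda th: model_cf(th) - ecf, x0=[0.2, 1.0, 0.3],
                    bounds=([1e-3, 1e-3, 1e-3], [5.0, 20.0, 5.0]))
print("sigma, lambda, eta =", fit.x)
```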
|
170 |
BIODIESEL PRODUCTION USING SUPPORTED 12-TUNGSTOPHOSPHORIC ACID AS SOLID ACID CATALYSTS. December 2014.
Biodiesel has achieved worldwide recognition over many years due to its renewability, lubricating properties, and environmental benefits. This abstract summarizes all the chapters of the thesis, with the research chapters referred to as research phases. The thesis starts with an introduction followed by a literature review, in which all the necessary data were collected. An artificial neural network (ANN) model was then built on the published research data to capture general trends and make predictions; both catalyst properties and reaction conditions were trended and predicted using the network model. The review revealed that esterification and transesterification require catalysts with slightly different properties.

In the first phase of the study, biodiesel production using 12-tungstophosphoric acid (TPA) supported on SBA-15 as a solid acid catalyst was investigated. A large number of 0-35% TPA-on-SBA-15 catalysts were synthesized by the impregnation method, and the effects of operating conditions such as catalyst wt.% and methanol-to-oil molar ratio on the transesterification of the model feedstock triolein were studied. A 25% TPA loading was found to be the optimum. A catalyst loading of 4.15 wt.% (based on triolein) and a 39:1 methanol-to-triolein molar ratio were found to be the optimum reaction parameter combination when the reaction temperature was fixed at 200 °C with a stirring speed of 600 rpm and a 10 h reaction time; the biodiesel yield obtained under these conditions was 97.2%. In the second phase, TPA was supported using an organic functional group (3-aminopropyltriethoxysilane, APTES) and incorporated into the SBA-15 structure. A 45 wt.% TPA-incorporated SBA-15 produced an ester with a biodiesel yield of 97.3 wt.% when 3 wt.% catalyst (based on green seed canola (GSC) oil) and a 25.8:1 methanol-to-GSC-oil molar ratio were used at 200 °C for a reaction time of 6.2 h.

In the third phase, process sustainability studies (process economics, process safety, energy efficiency, and environmental impact assessment) were conducted based on the results obtained in the second phase. It was concluded that the heterogeneous acid-catalyzed process had higher profitability than the homogeneous acid-catalyzed process, and that it was safer, more energy efficient, and more environmentally friendly.

In the fourth phase, the catalytic activity of tungsten oxide (WO3) and TPA supported by impregnation on H-Y, H-β, and H-ZSM-5 zeolites was tested for biodiesel production from GSC oil. In this phase, TPA/H-Y and TPA/H-β zeolites proved to be effective catalysts for esterification and transesterification, respectively. A 55% TPA/H-β catalyst showed balanced activity for both esterification and transesterification, yielding 99.3 wt.% ester when 3.3 wt.% catalyst (based on GSC oil) and a 21.3:1 methanol-to-GSC-oil molar ratio were used at 200 °C, a reaction pressure of 4.14 MPa, and a reaction time of 6.5 h. This catalyst (55% TPA/H-β) was also tested for the etherification of pure glycerol; complete glycerol conversion (100%) was achieved in 5 h at 120 °C and 1 MPa with a 1:5 glycerol-to-tert-butanol (TBA) molar ratio and 2.5% (w/v) catalyst loading. These conditions were later used to successfully produce glycerol ether from the glycerol obtained after transesterification of GSC oil. The mixture of GSC-derived biodiesel and glycerol ether was defined as the combined biofuel.

In the fifth phase, the catalytic activity of H-Y-supported TPA prepared by different impregnation methods was studied further in detail for the esterification of the free fatty acids (FFA) in GSC oil. From the optimization study, 97.2% conversion of the FFA present in the GSC oil was achieved using 13.3 wt.% catalyst and a 26:1 methanol-to-FFA molar ratio at a reaction temperature of 120 °C and a reaction time of 7.5 h. In the sixth and final phase, the techno-economic and ecological impacts of the biodiesel and combined biofuel production processes were compared based on the results obtained in the fourth phase. It was concluded that the biodiesel production process had higher profitability and was more energy efficient than the combined biofuel production process, whereas the combined biofuel production process was more environmentally friendly.
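As a sketch of the ANN trend model described in the review phase above, the snippet below fits a small feed-forward network that maps catalyst and reaction variables (TPA loading, catalyst wt.%, methanol-to-oil molar ratio, temperature) to biodiesel yield and evaluates it at one condition. The tiny data set is a synthetic placeholder, not the literature data compiled in the thesis; only the fitting and prediction workflow is illustrated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: TPA loading (wt%), catalyst (wt% of oil), MeOH:oil molar ratio, T (deg C)
X = np.array([[15, 3.0, 20, 180], [20, 3.5, 25, 190], [25, 4.0, 30, 200],
              [25, 4.2, 39, 200], [30, 4.5, 35, 200], [35, 5.0, 40, 210],
              [10, 2.5, 15, 170], [45, 3.0, 26, 200]], dtype=float)
y = np.array([78.0, 85.0, 92.0, 96.8, 95.0, 93.5, 70.0, 96.5])   # synthetic yields (wt%)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[25, 4.15, 39, 200]]))   # predicted yield at one assumed condition
```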
|