  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Biosorption of Hazardous Organic Pollutants

Bell, Paul John 12 1900 (has links)
<p>A study of the sorption of hazardous organic pollutants by live and dead microbial biomass (biosorption) has been made. Biosorption of lindane, pentachlorophenol, diazinon, 2-chlorobiphenyl, and malathion by Rhizopus arrhizus and activated sludge was investigated. Malathion was found to be removed by a chemical decomposition process when contacted with dead biomass. The other compounds were observed to be sorbed by the biomass, and the sorption process was found to be reversible. The biosorption isotherms could be represented by the Freundlich equation and were found to be nearly linear over the range of concentrations examined. The biosorptive uptake is positively correlated with the octanol/water partition coefficient for the compounds. Heats of sorption were estimated and indicate that the biosorption process involves a physical rather than a chemical mechanism. The biosorption phenomenon appears to involve both surface adsorption and absorption into the cell interior. Biosorptive uptake generally appears not to be strongly affected by competition from other sorbing compounds. The kinetics of biosorption of lindane are characterized by a rapid initial uptake followed by a slower accumulation process. In general, live and dead biomass were found to exhibit different levels of biosorptive uptake; however, no generalizations could be made concerning the direction or magnitude of the differences. The order of magnitude of removal of non-biodegradable hazardous compounds in biological treatment plants can be predicted from the biosorption isotherms.</p> / Doctor of Philosophy (PhD)
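The Freundlich fit mentioned in this abstract can be sketched numerically: the equation q = Kf·C^(1/n) is linear in log-log coordinates, so a least-squares fit of log q against log C recovers the parameters. The data and constants below are illustrative, not the thesis measurements.

```python
import numpy as np

# Freundlich isotherm: q = Kf * C**(1/n); log q = log Kf + (1/n) log C.
# Synthetic equilibrium data (hypothetical, not from the thesis);
# n close to 1 gives the "nearly linear" isotherm the abstract describes.
Kf_true, n_true = 2.0, 1.1
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc., mg/L
q = Kf_true * C ** (1.0 / n_true)                # uptake, mg per g biomass

# Linear least squares in log-log coordinates.
A = np.vstack([np.ones_like(C), np.log(C)]).T
coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
Kf_est, n_est = np.exp(coef[0]), 1.0 / coef[1]
```

With real data the residuals of this fit would also indicate how far the isotherm departs from linearity over the concentration range examined.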
172

Dynamic Modelling for Control of High Rate Anaerobic Wastewater Treatment Processes

Jones, Richard M. January 1992 (has links)
<p>The overall goal of this research was to develop an improved understanding of the dynamic behavior necessary to define monitoring and control requirements of high rate anaerobic biological wastewater treatment processes. This required generation of an extensive non-steady state data set.</p> <p>Data were collected during dynamic experiments on a 77 L pilot scale anaerobic fluidized bed, operated at the Wastewater Technology Centre in Burlington, Ontario. Experiments consisted of 12 and 36 hour pulse inputs of substrates specific to the distinct groups of microorganisms present in the process, and ranged from 7 to 21 days in length. The methane and carbon dioxide production rates, biogas hydrogen content, pH, effluent concentrations of volatile acid intermediates and chemical oxygen demand (COD) were measured using on-line instrumentation or laboratory analysis of discrete samples.</p> <p>Experimental results demonstrated the relative importance of substrate and product inhibition on various reaction steps in the process, and indicated that the short term dynamic response can change significantly over time. The gas phase hydrogen concentration, previously proposed as an indicator of process stability, was found to have limited utility as a monitoring variable. The loss in potential methane production and the increased chemical requirements for pH control were shown to represent a significant cost during an upset. This suggests that in addition to the environmental incentive, an economic incentive exists to maintain stable operation of the process.</p> <p>A mechanistic, four-bacterial-population dynamic model of the process was formulated. Due to lack of suitable mechanistic models for bacterial concentrations, the model was only able to predict the short term dynamic response. An extended Kalman filter was used to combine the four population model with stochastic models for the bacteria concentration states. 
A sensitivity analysis was required to select a subset of parameters and stochastic states for estimation.</p> <p>The extended Kalman filter allowed the model to track the measured states, although in some cases this tracking could only be achieved through unrealistic adjustments of the bacterial concentration states. The time-variable behavior of the estimated stochastic states indicated a number of potential model improvements.</p> / Doctor of Philosophy (PhD)
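The extended Kalman filter used in this work combines a mechanistic model with stochastic states; one generic predict/update cycle can be sketched as below. This is a textbook EKF step, not the four-population formulation of the thesis, and the toy usage at the end is purely illustrative.

```python
import numpy as np

# One predict/update cycle of an extended Kalman filter (generic sketch).
# f is the process model, F its Jacobian at the current state; h is the
# observation model, H its Jacobian; Q and R are the noise covariances.
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    x_pred = f(x, u)                        # propagate state through model
    P_pred = F @ P @ F.T + Q                # propagate covariance
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))    # correct with measurement z
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar random-walk state, observed directly (illustrative only).
I = np.eye(1)
x, P = np.zeros(1), I.copy()
x, P = ekf_step(x, P, None, np.array([1.0]),
                lambda x, u: x, I, lambda x: x, I,
                0.01 * I, 0.1 * I)
```

The "unrealistic adjustments of the bacterial concentration states" the abstract mentions correspond to the filter pushing the correction term through poorly observable states; inspecting K and the innovations reveals such behavior.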
173

Emulsion Polymerization Kinetics with Crosslinking

Guo, Hua 12 1900 (has links)
<p>Emulsion polymerization of vinyl/divinyl monomers may result in crosslinked polymers and lead to peculiar characteristics for the kinetics and network structure. Herein is provided a comprehensive study including both extensive experimentation and computer modeling with the aim of elucidating the effects of crosslinking on emulsion polymerization kinetics.</p> <p>The model monomer system chosen for the present study was methyl methacrylate (MMA) and ethylene glycol dimethacrylate (EGDMA) as the comonomer and crosslinker. The polymerization temperature was 50°C. Potassium persulfate (KPS) was the initiator. Sodium dodecyl sulphate (SDS) as emulsifier was used below and above the critical micelle concentration (CMC).</p> <p>The monomer conversion, polymer particle number and size development, swellability of crosslinked polymer particles, pendant double bond (PDB) conversion, the glass transition temperature (Tg) and the internal heterogeneity in polymer particles were measured as a function of time in a batch reactor. It has been found that EGDMA level in the monomer feed and the initiator concentration have a pronounced effect on the behavior of the polymerization process. The experimental responses were very different when polymerizations were done below and above the critical micelle concentration.</p> <p>ESR was used to measure the radical concentration increase, which is particularly great at higher levels of crosslinking. Time profiles of propagating radical concentrations show two regions: a relatively constant radical concentration region, coupled with high rates of monomer conversion and polymer particle generation, followed by a dramatic radical concentration increase coupled with a levelling off in monomer conversion and polymer particle concentration. 
These observations suggest that a trapping of radicals within the crosslinked polymer network occurs during emulsion polymerization, and this is further confirmed by ESR testing of solid polymer samples coupled with DSC testing of residual PDB in the same samples.</p> <p>The second part of this study features kinetic modeling and the simulation of the kinetic equations by the Monte-Carlo method. An existing kinetic model for crosslinking density distribution has been revised to account for the shielding effect on pendant double bond reactivity. In addition, a stochastic Monte-Carlo simulation algorithm for emulsion polymerization has been developed. After basic testing, the program has been applied to emulsion homopolymerization with long chain branching and copolymerization with or without branching/crosslinking. Calculated results clearly reflect the kinetic phenomena when crosslinking is relevant. In the present study, Monte-Carlo simulation has been used to simulate kinetic behavior in stage II where polymer particle concentration is constant. However, the algorithm can readily be extended to include the nucleation period (stage I) and the finishing stage (stage III) and to consider a host of kinetic event combinations provided that the proper kinetic expressions, constraints and criteria conditions are available.</p> / Doctor of Philosophy (PhD)
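A stochastic simulation of emulsion polymerization kinetics typically draws event times and event types from the current reaction rates (the Gillespie algorithm). The sketch below simulates a single particle with only two event types, radical entry and propagation; the rate constants are illustrative, not the MMA/EGDMA values measured in the thesis.

```python
import random

# Gillespie-style stochastic simulation of events in one polymer particle:
# radical entry and propagation only (toy sketch; illustrative constants).
def simulate_particle(k_entry=1.0, k_prop=5.0, monomers=200, seed=0):
    rng = random.Random(seed)
    t, n_rad, m = 0.0, 0, monomers
    while m > 0:
        r_entry = k_entry                        # rate of radical entry
        r_prop = k_prop * n_rad * m / monomers   # rate of propagation
        total = r_entry + r_prop
        t += rng.expovariate(total)              # time to the next event
        if rng.random() * total < r_entry:
            n_rad += 1                           # a radical enters
        else:
            m -= 1                               # one monomer unit adds
    return t, n_rad

t_end, radicals = simulate_particle()
```

Extending such a scheme to stages I and III, or to branching/crosslinking events, amounts to adding further event channels with their own rate expressions, exactly as the abstract indicates.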
174

Finite Element Analysis of Layer Non-Uniformity in Polymer Coextrusion Flows

Riquelme, Torres Agustin 05 1900 (has links)
<p>Coextrusion is an important polymer processing technology that consists of combining layers of different materials in their molten state to form a uniform structure. The final product has the combined properties of the individual components at a cost that is a fraction of lamination. Two basic problems have been defined in polymer coextrusion: the temporal instability of the polymer/polymer interface, and the problem of maintaining a uniform layer and thickness distribution across the die in steady state operation. This work is concerned with the latter problem class. Layer non-uniformity is a distortion of the internal interface that can be seen as a migration or displacement of the more viscous fluid by the less viscous one.</p> <p>A three-dimensional finite element code was developed to study fundamental issues in the layer non-uniformity problem. One area of study is the contact line region (where the fluid/fluid interface meets the wall). Use of the classical no-slip condition leads to a mathematical singularity. In the past, this was circumvented by extrapolation methods. In this work, a localized slip model allowed a detailed study of the interface behavior near the wall. Also, a rigid (or "stick") contact line was used to explain certain experimentally observed interface shapes.</p> <p>Based on experimental evidence, two distinct interface deformation mechanisms are proposed: Type I shows a displacement of the more viscous fluid by the less viscous one, with a slippage of the contact line. In Type II, there is an intrusion of the less viscous fluid into the more viscous fluid. Type I and II deformations can be captured using the 3-D thermal model formulation described in this work, but the shape depends on the interplay between the material and die parameters. 
The value of the slip coefficient at the contact line is important and may be a fundamental characteristic of the particular polymer melt/polymer melt/die material system.</p> <p>A detailed study of the effect of thermal and geometrical parameters on the final interface shape was conducted. It was found that the thermal dependence of viscosity plays the most important role in determining the final interface shape as compared against other material parameters. The effect of this parameter is more pronounced near the die walls, where the viscous dissipation is higher. The code shows excellent agreement with experimental data for polycarbonate/polycarbonate extrusion systems.</p> <p>An alternative analysis for the three-dimensional flow of viscoelastic fluids in ducts was presented as an attempt to explain experimentally observed complicated interface shapes that cannot be captured by a Generalized Newtonian Fluid model. A modified Phan-Thien Tanner model and a second-order model are used in conjunction with a space-marching finite element code based on the Parabolized Navier-Stokes equations to study the secondary flows in channels.</p> / Doctor of Philosophy (PhD)
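The dominant role of the thermal dependence of viscosity can be illustrated with a common exponential (Arrhenius-type) law, a standard ingredient of Generalized Newtonian Fluid models. The parameter values below are illustrative, not the polycarbonate data used in the thesis.

```python
import numpy as np

# Exponential thermal dependence of melt viscosity:
#   eta(T) = eta_ref * exp(-b * (T - T_ref)),  T in kelvin.
# b sets the thermal sensitivity; values here are assumptions for the sketch.
def viscosity(T, eta_ref=1.0e3, b=0.02, T_ref=553.0):
    return eta_ref * np.exp(-b * (T - T_ref))

# A wall 20 K hotter than the core thins the melt locally; since viscous
# dissipation is highest near the walls, this is where the interface skews.
ratio = viscosity(573.0) / viscosity(553.0)
```

A 20 K temperature difference changes the local viscosity by about a third here, which is why a thermal 3-D formulation captures interface deformations that an isothermal model misses.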
175

Polyurethane Surfaces Bearing Immobilized Thrombin Binding Agents: Preparation and Interactions with Plasma

Tian, Yuan January 1995 (has links)
<p>Novel polyurethanes bearing D-Phe-Pro-Arg chloromethyl ketone (PPACK), sulphonate groups and polyethylene oxide were synthesized using block copolymerization and chain-grafting methods. The structure and the PPACK attachment mode of the PPACK-bearing polyurethane were studied by 2D NMR. The distribution of the functional groups on the corresponding biomaterial surfaces was studied by XPS. PPACK was grafted onto the polyurethane chains in such a way that its bioactivity (interactions with thrombin) was retained. The protein layers deposited from plasma onto the different polymer surfaces were studied by SDS-PAGE in conjunction with immunoblotting methods. The initiation and generation of active coagulation factors in plasma contacting different polymer surfaces were investigated by chromogenic substrate methods using specific oligopeptide substrates. The polyurethane bearing PPACK residues effectively inhibited the activation and generation of factor XIIa and thrombin in plasma contacting the surface, and prevented the clotting of the plasma. This much-improved in vitro biocompatibility indicated the validity of using non-heparin antithrombin agents to prevent surface induced coagulation, and it showed that the PPACK-based polymers have potential as novel biomaterials to improve the biocompatibility of current medical devices such as small diameter vascular prostheses, hemodialyzers, stents, and catheters.</p> / Doctor of Philosophy (PhD)
176

Using external information for statistical process control

Yoon, Seongkyu 09 1900 (has links)
<p>This research presents practical solutions for several problems in the case of multivariate statistical process control (MSPC) for process monitoring and fault diagnosis of industrial processes. Various types of external information are used with multivariate statistical models to deal with fault detection and isolation (FDI) problems. The first part of the research deals with the fault isolation problem. It is based on the fact that MSPC approaches have been found to be very useful for fault detection, but less powerful for fault isolation because of the non-causal nature of the data. To improve fault isolation, an approach is proposed that uses additional data on past faults to supplement existing contribution plot methods. This approach extracts steady-state signatures of faults in both the correlation model space and the residual space. The use of transient fault trajectories or initial fault signatures would minimize detection delay and false isolation of a fault. An indirect usage of process dynamic information to deal with FDI problems is proposed in the second part of the work. Since process signals represent the cumulative effects of many underlying process phenomena, multiresolution analysis via wavelet transformations is used to decompose signals. The third part of this thesis examines both the fundamental and the practical differences between the causal and statistical model-based approaches to FDI. The causal-model-based approach is based on causal state variable or parity relation models developed from theory or identified from plant test data. Faults are then detected and isolated with structured or directional residuals from these models. MSPC approaches are based on non-causal models built with multivariate latent variable methods using historical process data. 
Faults are then detected by referencing future data against these covariance models, and isolation is attempted through examining contributions to the breakdown of the covariance structure. Most processes are subject to changes in operating conditions such as feed rate and composition, product grade, controller status, and so on. Sometimes these large common-cause variations disguise or distort the information relevant to faults. This difficulty can be minimized if one builds a correlation model that includes only the process variations relevant to FDI. By incorporating various types of prior knowledge into the empirical model building process, one can estimate a hybrid correlation model that includes both raw data and prior knowledge. Application of the hybrid correlation model for process monitoring and fault diagnosis is proposed and used for analyzing a real industrial dataset. (Abstract shortened by UMI.)</p> / Doctor of Philosophy (PhD)
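The contribution-plot mechanism this abstract builds on can be sketched with a small PCA monitoring example: new data are projected onto a latent-variable model, and the per-variable contributions to the squared prediction error (SPE) in the residual space point at the faulty sensor. The data below are synthetic, and one component is retained purely for this toy case.

```python
import numpy as np

# PCA-based monitoring sketch with SPE contributions for fault isolation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # two correlated variables

mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:1].T                                     # retain 1 component (toy data)

def spe_contributions(x_new):
    xs = (x_new - mu) / sd
    resid = xs - P @ (P.T @ xs)                  # part left in residual space
    return resid ** 2                            # contribution per variable

fault = X[0].copy()
fault[3] += 8.0                                  # bias one sensor by 8 sigma
contrib = spe_contributions(fault)               # variable index 3 dominates
```

The thesis's refinement is to supplement exactly these contributions with stored signatures of past faults, which disambiguates cases where several variables contribute through the correlation structure.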
177

Statistical Analysis and Adjustment of the Mass/Energy Balances of Processes

Campos, Alicia Garcia Yuraima 06 1900 (has links)
<p>An improved technique for data adjustment to satisfy mass/energy balances for single or multiple processing unit systems has been developed. The method was successfully tested on examples from the literature. With this technique, biased data can be detected and sources of error identified and isolated.</p> <p>A multivariate normal distribution for the measurement vector x, was assumed and used to derive a test function vector w, with univariate normal distribution. A chi-square test on w is proposed for detecting biased data. An algorithm for identifying and isolating sources of bias was developed, using standard normal two-sided tests on each element of vector w and using the imbalance of the constraints, e.</p> <p>In the absence of bias, adjusted values of the measurement vector and/or the estimates of missing variables can be obtained. When chemical reactions take place, the extents of the independent reactions are also estimated.</p> <p>The main requirements of this technique are the knowledge of the process constraints and measurement statistics.</p> <p>Implementation of the technique is straightforward and it is suggested that the method be used as a diagnostic aid in process analysis.</p> / Master of Engineering (ME)
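The adjustment and chi-square test described in this abstract can be sketched for a single splitter unit with linear balance constraints A x = 0 and measurement covariance V; the flows and variances below are illustrative, not from the thesis.

```python
import numpy as np

# Least-squares adjustment of measurements to satisfy linear mass balances.
A = np.array([[1.0, -1.0, -1.0]])       # flow 1 splits into flows 2 and 3
V = np.diag([0.04, 0.04, 0.04])         # measurement variances
x = np.array([10.3, 6.1, 4.5])          # raw (inconsistent) measurements

e = A @ x                               # imbalance of the constraints
H = A @ V @ A.T                         # covariance of the imbalance
x_adj = x - V @ A.T @ np.linalg.solve(H, e)   # reconciled values

# Chi-square statistic on the imbalance: large values flag biased data.
chi2 = float(e @ np.linalg.solve(H, e))
```

The adjusted flows satisfy the balance exactly, and the statistic (here well below the 5% chi-square critical value for one constraint, about 3.84) indicates no gross error, mirroring the diagnostic use suggested in the abstract.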
178

Multivariate Statistical Regression Methods for Process Modelling and Experimental Design

Dayal, Singh Bhupinder 06 1900 (has links)
<p>In this thesis, various multivariate statistical regression methods are investigated for estimating process models from the process input-output data. These identified models are to be used for designing model based controllers and experimental optimisation of multivariate processes. The following issues are explored: (i) identification of finite impulse response models for model based control; (ii) multi-output identification for multivariate processes; (iii) recursive updating of process models for adaptive control and prediction; and (iv) experimental design in latent variables for high dimensional systems.</p> <p>In the first part of the thesis, various approaches to identifying non-parsimonious finite impulse response (FIR) models are compared on the basis of closeness of fit to the true process, robust stability provided by the resulting model, and the control performance obtained. The major conclusion by all assessments is that obtaining FIR models by first identifying low order transfer function models by prediction error methods is much superior to any of the methods which directly identify the FIR models.</p> <p>In the second part, the potential of multi-output identification for multivariate processes is investigated via simulations on two process examples: a quality control example and an extractive distillation column. The identification of both the parsimonious transfer function models using multivariate prediction error methods and of non-parsimonious FIR models using multivariate statistical regression methods such as two-block partial least squares (PLS2), canonical correlation regression (CCR), and reduced rank regression (RRR) are considered. The multi-output identification methods provide better results when compared to the single-output identification methods based on essentially all comparison criteria. 
The benefits of using multi-output identification are most obvious when there is a limited amount of data and when the secondary output variables have better signal-to-noise ratios.</p> <p>In the third part of this thesis, an improvement to the PLS algorithm is made. It is shown that only one of either the X or the Y matrix needs to be deflated during the sequential process of computing latent vectors. This result then leads to two very fast PLS kernel algorithms. Using these improved kernel algorithms, a new and fast recursive, exponentially weighted PLS algorithm is developed. The recursive PLS algorithm provides much better performance than the recursive least squares algorithm when applied to adaptive control of a simulated 2 by 2 multivariable continuous stirred tank reactor and updating of a multi-output prediction model for an industrial mineral flotation circuit.</p> <p>Finally, a design methodology similar to the evolutionary operation (EVOP) and the response surface methodology (RSM) for the optimisation of high-dimensional systems is proposed. A variation of the PLS algorithm, called selective PLS, is developed. It can be used to analyse the process data and select meaningful groupings of the process variables in which the EVOP/RSM experiments can be performed.</p> / Doctor of Philosophy (PhD)
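The single-matrix deflation result this abstract highlights can be sketched with a NIPALS PLS1 loop that deflates only X and leaves y untouched; the data are synthetic, and with a full set of components on noiseless data the PLS coefficients reduce to the least-squares solution.

```python
import numpy as np

# NIPALS PLS1 sketch deflating only X (deflating either X or Y alone
# is sufficient, since the score vectors t are mutually orthogonal).
def pls1(X, y, n_comp):
    X = X.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)             # weight vector for this component
        t = X @ w                          # scores
        p = X.T @ t / (t @ t)              # X loadings
        q.append(y @ t / (t @ t))          # y loading
        X -= np.outer(t, p)                # deflate X only; y is untouched
        W.append(w)
        P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)  # regression coefficients

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
beta = np.array([1.0, -2.0, 0.5, 0.0])
b_pls = pls1(X, X @ beta, 4)               # full rank, no noise: recovers beta
```

Skipping the y-deflation removes one rank-one matrix update per component, which is part of what makes the kernel and recursive variants described in the thesis fast.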
179

Control relevant model identification with prior knowledge

Esmaili, Ali 07 1900 (has links)
<p>The use of prior or accumulated knowledge for the identification of multivariable models that will generally assure the stability of multivariable model based controller designs is investigated. The systems that are considered in this study are by and large linear, time invariant, ill-conditioned, and multi-input multi-output (MIMO) in nature. The effect of different types of prior information on controller stability is studied. It is shown that use of some types of prior knowledge may improve the model quality in terms of the stability of the resulting closed-loop system, while use of other types of prior knowledge may degrade the model quality. Prior knowledge that provides information about the low-gain direction of the process has the most significant effect on controller stability. Several issues associated with incorrect prior knowledge and the sensitivity of controller stability to such an error are also considered. This leads to checkable metrics that can be used by practitioners to evaluate the sensitivity of the controller to given prior knowledge before controller implementation. The issue of model maintenance (that is re-identification of existing models) that will result in improved controller stability in MIMO controllers is then addressed. Posterior knowledge about existing controller performance can be used to re-estimate models. Two novel controller designs result from this study: a multi-model style controller, and an adaptive style controller. Finally, issues regarding closed-loop identification of single-input single-output systems are considered. In particular, it is shown that the direct method of closed-loop identification results in an improved model quality compared to 2-step methods of closed-loop identification.</p> / Doctor of Philosophy (PhD)
180

Studies in Data Reconciliation Using Principal Component Analysis

Tong, Hongwei 08 1900 (has links)
<p>Measurements such as flow rates from a chemical process are inherently inaccurate. They are contaminated by random errors and possibly gross errors such as process disturbances, leaks, departure from steady state, and biased instrumentation. These measurements violate conservation laws and other process constraints. The goal of data reconciliation is to resolve the contradictions between the measurements and their constraints, and to process contaminated data into consistent information. Data reconciliation aims at estimating the true values of measured variables, detecting gross errors, and solving for unmeasured variables.</p> <p>This thesis presents a modification of a model of bilinear data reconciliation which is capable of handling any measurement covariance structure, followed by a construction of principal component tests which are sharper in detecting and have a substantially greater power in correctly identifying gross errors than the currently used statistical tests in data reconciliation. Sequential Analysis is combined with Principal Component Analysis to provide a procedure for detecting persistent gross errors.</p> <p>The concept of zero accumulation is used to determine the applicability of the established linear/bilinear data reconciliation model and algorithms. A two stage algorithm is presented to detect zero accumulation in the presence of gross errors.</p> <p>An interesting finding is that the univariate and the maximum power tests can be quite poor in detecting gross errors and can lead to confounding in their identification.</p> / Doctor of Philosophy (PhD)
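A basic principal component test of the kind this abstract builds on rotates the constraint residual vector into uncorrelated, unit-variance scores and tests each score against a standard normal critical value; the flows, variances, and critical value below are illustrative, not the thesis's sequential procedure.

```python
import numpy as np

# Principal component test sketch on constraint residuals.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])          # two serial flow balances
V = np.diag([0.04, 0.04, 0.04])           # measurement variances
x = np.array([10.0, 9.2, 9.1])            # flow 2 reads low: gross error?

e = A @ x                                 # residuals of the two balances
H = A @ V @ A.T                           # their covariance
lam, U = np.linalg.eigh(H)                # eigendecomposition of H
w = U.T @ e / np.sqrt(lam)                # uncorrelated unit-variance scores
flag = np.abs(w) > 1.96                   # 5% two-sided normal test
```

Here one rotated score exceeds the critical value while the raw residuals are individually less conclusive, which illustrates why such tests can be sharper than univariate tests on the residuals themselves.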
