Free Radical Copolymerization Kinetics of Styrene/Divinylbenzene

Vivaldo-Lima, Eduardo
<p>An effective model for the bulk, solution and suspension copolymerization of styrene/divinylbenzene (DVB) has been developed. Its effectiveness is understood as a compromise between a sound theoretical basis and a simple mathematical structure, which allows its governing equations to be solved with conventional computational tools.</p> <p>To build the model, a comprehensive analysis was made of the elementary reactions and of the restrictions imposed by the physical environment of the growing polymer. The main issues considered in the model are: diffusion-controlled initiation, propagation and bimolecular termination reactions; different reactivities of double bonds; the effects of solvent, chain transfer agent, inhibitor, type of crosslinker (m-DVB, p-DVB or mixtures of both) and type of initiator; as well as crosslinking and primary-secondary cyclization reactions.</p> <p>In building the model, it was necessary to review and improve the conventional theory of diffusion-controlled free radical polymerization kinetics, and several contributions to that theory resulted. The most important are: the demonstration that the "parallel" approach to modelling diffusion-controlled effective kinetic constants (widely used in this field) is incorrect, the proposal of an effective way to calculate molecular weight averages, and the proposal of a model for calculating non-equilibrium free volume.</p> <p>By performing a detailed compilation and analysis of the experimental information available in the literature, all the objectives could be accomplished satisfactorily without additional experiments. Most of the experimental data and model predictions are in excellent agreement for the pre- and post-gelation periods. However, it is recognised that the real behaviour of the polymerizing system is so complex (even the experimental techniques for characterizing network polymers are still under development) that the model developed herein must be considered a first realistic approximation. Some guidelines for improving the model (mostly associated with secondary cyclization), together with preliminary qualitative calculations for these modifications, are given.</p> / Master of Engineering (ME)
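The diffusion-control issue above can be illustrated with a toy calculation. This is a minimal sketch, not the model developed in the thesis: it ties a diffusion rate constant to free volume through a Doolittle-type expression and combines the chemical and diffusion contributions reciprocally so that the slower step controls (one common alternative to the "parallel" form criticized above). All numerical values are invented.

```python
import numpy as np

# Illustrative only: effective propagation constant limited by the slower of
# chemical reaction and diffusion, with diffusion collapsing as free volume
# shrinks at high conversion.
k_chem = 1.0e3                     # intrinsic rate constant, L/(mol s)
A, B = 1.0e8, 1.0                  # diffusion pre-factor and free-volume sensitivity
vf = np.linspace(0.15, 0.03, 5)    # free volume shrinking as conversion rises

k_diff = A * np.exp(-B / vf)                   # Doolittle-type diffusion constant
k_eff = 1.0 / (1.0 / k_chem + 1.0 / k_diff)    # slower step controls
```

Early on, `k_eff` sits near the chemically controlled limit; as free volume shrinks, it falls by many orders of magnitude, which is the qualitative behaviour a diffusion-controlled kinetic model must capture.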

Flocculation of Metal Hydroxides with Polymers - Optimization and Kinetic Modelling

Huck, Melvin Peter
<p>This study determined optimum conditions for the use of polymers in the flocculation of metal hydroxides present in neutralized mine drainage. Using the criterion of supernatant particulate metal concentration following flocculation and settling, optimum polymer properties and mixing conditions were obtained for both strong and weak simulated minewaters containing iron. It was shown that these conditions also provided minimum metal residuals (less than 0.3 mg/l) for simulated minewaters containing copper or zinc. For two-metal systems, the residual particulate metal concentrations achieved were lower than for the corresponding single-metal systems.</p> <p>Near the optimum, mixing conditions were more important than polymer properties. Optimum mixing times decreased greatly as minewater strength increased, as would be expected from kinetic considerations. Optimum mixing speeds increased moderately with increasing initial metal concentration, probably because of a corresponding increase in floc strength. Polymer molecular weight had no effect over the range investigated. The polymer degree of hydrolysis was unimportant in the range from 2 to 35 percent for the three metals at various initial concentrations. The optimum polymer dosage was not narrowly defined, but could be related to the minewater strength as 1.7 × 10⁻³ times the initial metal concentration.</p> <p>Experiments performed to determine the best kinetic model for the process demonstrated that it could not be represented by a simple aggregation model and that none of the available models incorporating floc breakup were adequate. A new model was proposed which incorporated, for the first time, the decrease in the aggregation rate with time caused by the shortening of the adsorbed polymer loops, and the existence of a critical mixing intensity for floc breakup. This model, which is second order in particle concentration, was found to predict satisfactorily the results obtained with the simulated iron, copper and zinc minewaters. Although the aggregation rate was greater for zinc than for iron or copper, the achievable supernatant particulate concentration was independent of metal type. The critical mixing intensity for floc breakup was found to increase with increasing initial metal concentration.</p> <p>The model was tested on three actual minewaters and was found to give satisfactory predictions.</p> / Doctor of Philosophy (PhD)
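The kinetic idea — a rate that is second order in particle concentration, with an aggregation constant that decays in time as the adsorbed loops shorten — can be sketched numerically. This is an illustrative toy, not the thesis model; the rate constants and the exponential decay form are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy flocculation kinetics: second order in particle count n, with a rate
# "constant" k(t) that decays as adsorbed polymer loops shorten.
k0, beta = 1.0e-3, 0.05   # initial aggregation constant and its decay rate

def dndt(t, n):
    k = k0 * np.exp(-beta * t)   # aggregation slows as loops shorten
    return -k * n ** 2           # second order in particle concentration

sol = solve_ivp(dndt, (0.0, 120.0), [1.0e4], rtol=1e-8, atol=1e-8)
n_final = sol.y[0, -1]
```

Because k(t) decays, the particle count plateaus well above zero instead of flocculating to completion, which is the qualitative behaviour simple aggregation models cannot reproduce.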

Oxidation of o-xylene in an integral packed bed reactor

Adegbesan, Olutunmbi Kehinde
<p>The oxidation of o-xylene was investigated in an integral packed bed reactor using a K<sub>2</sub>SO<sub>4</sub>-promoted vanadium pentoxide catalyst on a TiO<sub>2</sub> support.</p> <p>Reaction products consisting of nine chemical components were analyzed by a new temperature-programmed gas chromatographic technique using three different columns to effect separation.</p> <p>A kinetic model based on the REDOX (catalyst reduction and oxidation) mechanism was employed. The statistical method of experimental design for parameter estimation based on M.J. Box's modification [116] of the Draper and Hunter method [47] was used. Statistical analysis based on the eigenvalue-eigenvector method of Box et al. [45] indicated correlation among the responses, and this method was used to transform the multiple response data for use in estimating the parameters in the kinetic model.</p> <p>Since parameter estimates in the model were available from Wainwright's previous work [26], the method of Hoffman and Reilly [124], which is based on Bayes' theorem, was used to transfer this prior information on the parameters to the present experimental system.</p> <p>The kinetic data obtained using the new chromatographic technique developed in this study were found to be consistent. The kinetic model of Wainwright [26] for this reaction system was fitted to the multiple response data obtained from this study, and the adequacy of this model in representing the data was tested. The use of statistical techniques in experimental programs to develop kinetic models was found to be extremely effective. Some of the difficulties in using them are outlined.</p> / Master of Engineering (ME)
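The transfer of prior parameter information can be illustrated for the simplest linear-Gaussian case, where Bayes' theorem reduces to a precision-weighted average of the prior estimate and the new-data estimate. All numbers below are invented; the thesis applies the Hoffman-Reilly method to a multiresponse kinetic model, which is considerably more involved.

```python
import numpy as np

# Prior estimate of a single rate parameter from earlier work (hypothetical):
theta_prior, var_prior = 2.0, 0.5 ** 2

# "New" data from a linear model y = theta * x + noise (values invented):
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.4 * x + np.array([0.05, -0.03, 0.02, -0.04])
var_noise = 0.05 ** 2

# Least-squares estimate from the new data alone:
theta_data = (x @ y) / (x @ x)

# Posterior blends prior and data, weighted by precision (inverse variance):
prec_prior = 1.0 / var_prior
prec_data = (x @ x) / var_noise
theta_post = (prec_prior * theta_prior + prec_data * theta_data) / (prec_prior + prec_data)
```

With precise new data, the posterior sits close to the data estimate while still being pulled slightly toward the prior, which is exactly the behaviour one wants when carrying parameter knowledge across experimental systems.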

Mechanisms of Filler Flocculation with PEO/Cofactor Dual-component Flocculants

Lu, Chen
<p>High molecular weight poly(ethylene oxide) (PEO) is used in papermaking as a flocculant to incorporate fines and fillers into paper. PEO flocculation is more effective in the presence of cofactors, which are phenolic polymers capable of forming aqueous complexes with PEO. In this work, two types of model cofactors were developed to study the PEO/cofactor flocculation mechanism. The first type comprised latex particles with a polystyrene core and a poly(vinyl phenol) shell (PS-PVPh), prepared by surfactant-free emulsion polymerization. It was found that PS-PVPh particles enhanced the ability of PEO to flocculate polystyrene latex. When the composite particles were added after PEO, they bridged together PEO-coated particles and aggregates. When they were added before PEO, they acted as bridging agents and adsorbed PEO in an extended configuration ideal for flocculation.</p> <p>The second type of cofactor developed was water-soluble tyrosine-containing polypeptides (TCP). TCP cofactors had well-defined structures and were optically active; therefore, many analytical techniques, such as NMR, light scattering, isothermal titration calorimetry, and circular dichroism, could be applied to study the PEO/TCP complex formation mechanism. By taking the complex formation mechanism into account, the complex bridging flocculation mechanism of Xiao et al. was extended to explain a range of flocculation behaviours. According to the extended mechanism, PEO/TCP complexes function as bridges that couple filler particles. Meanwhile, the complexes undergo deactivation - a new concept developed in this work. Deactivated complexes lose the ability to couple fillers, preventing the flocculation from reaching completion. It was proposed that deactivation is induced by the encapsulation of TCP phenolic groups by PEO.</p> / Doctor of Philosophy (PhD)
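The deactivation concept can be caricatured with two coupled rate equations: active complexes both bridge filler particles and deactivate irreversibly, so flocculation stalls before completion. All rate constants are invented; this is a sketch of the idea, not the mechanism as developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# n = filler particle count, c = active PEO/cofactor complex concentration.
k_floc, k_deact = 0.05, 0.2   # invented bridging and deactivation constants

def rhs(t, s):
    n, c = s
    return [-k_floc * n * c,   # bridging consumes particles while c is active
            -k_deact * c]      # first-order deactivation of the complexes

sol = solve_ivp(rhs, (0.0, 60.0), [100.0, 1.0], rtol=1e-8, atol=1e-10)
n_final, c_final = sol.y[0, -1], sol.y[1, -1]
# n plateaus near 100*exp(-k_floc/k_deact): flocculation never reaches completion.
```

Once the active complex is exhausted, the particle count freezes at a finite plateau, reproducing the "incomplete flocculation" signature attributed to deactivation.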

Steady-State and Dynamic Behaviour of Combined and Separate Sludge Carbon Removal-Nitrification Systems

Sutton, Matthew Paul
<p>This dissertation examines the degree of nitrification that can be accomplished in combined and separate activated sludge systems over a temperature range of 5° to 25°C and a system solids residence time range of 4 to 10 days, under both steady and non-steady operating conditions.</p> <p>Treating municipal sewage under steady flow conditions, it was found that the rate of nitrification was independent of the concentration of filterable TKN or ammonia. Temperature and solids retention time significantly affected filterable TKN removal. The degree of nitrification obtained in the combined and separate sludge systems was comparable.</p> <p>The parallel pilot plant systems were subjected to a number of non-steady influent conditions. The responses to a pulse change in influent pH and a step-down in temperature indicated that the separate sludge system had a greater capacity to withstand such conditions. Transfer function models, together with time series models, were developed to describe the dynamic responses of the nitrifying systems to changes in influent flow and in organic carbon and inorganic nitrogen concentrations. The observed and model results indicated that greater effluent filterable TKN variation can be expected from nitrifying systems operated under variable flow and concentration inputs than under variable concentration inputs alone.</p> / Doctor of Philosophy (PhD)
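A transfer-function model of the kind described can be sketched as a discrete first-order (ARX-type) difference equation; the pole, gain, and step input below are invented, not the fitted pilot-plant values.

```python
import numpy as np

# Toy first-order discrete transfer function: effluent TKN y responding to a
# step increase in influent loading u. Coefficients a (pole) and b (gain term)
# are illustrative only.
a, b = 0.8, 0.5
u = np.r_[np.zeros(5), np.ones(20)]   # step in influent loading at sample 5
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = a * y[t - 1] + b * u[t - 1]   # y_t = a*y_{t-1} + b*u_{t-1}

gain = b / (1.0 - a)   # steady-state gain the response approaches
```

The response rises exponentially toward the steady-state gain; fitting `a` and `b` (plus a time-series noise term) to observed input-output records is the essence of the transfer-function modelling the abstract refers to.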

Quality Control for Batch Processes Using Multivariate Latent Variable Methods

Flores-Cerrillo, Jesus
<p>The main goal in many processes is to obtain consistent and reproducible operation and end-quality properties. In this thesis the problem of product quality control in batch and semi-batch processes is addressed. Unlike much of the published literature, which uses first-principles models, this thesis studies the end-quality feedback control problem using only empirical Partial Least Squares (PLS) models. Several simple, practical and effective regulatory control strategies are proposed.</p> <p>The thesis consists of four main chapters: i) on-line control of a distributed end-quality property (particle size distribution, PSD) using mid-course correction (MCC) strategies, ii) an inferential-adaptive control approach that combines on-line and batch-to-batch control, iii) a novel reduced-dimensional-space control algorithm to obtain complete manipulated variable trajectories (MVTs) consistent with past operation, and iv) incorporation of prior batch-to-batch information for batch analysis and monitoring.</p> <p>In the first section, three on-line empirical MCC strategies are proposed for the control of bimodal PSDs in emulsion polymerization systems. The performance of the control strategies is evaluated using a detailed theoretical simulator. Control is applied only when the predicted properties fall outside a statistically defined "no-control" region. Each control strategy corresponds to a control objective: i) control of the second mode of the distribution, ii) control of the full bimodal PSD and iii) control of relative distributions. Advantages and disadvantages of each of the control strategies are discussed.</p> <p>In the second part a combined on-line and batch-to-batch control strategy is presented. The approach extends earlier MCC strategies to include multiple decision and correction points, batch-to-batch information to reject batch-wise correlated disturbances, and an adaptive PLS approach that updates the models from batch to batch to overcome model error, changing process conditions and unknown disturbances. The methodology is again illustrated with the control of PSD in emulsion polymerization. Both the problem of regulation about a fixed set-point PSD in the face of disturbances and the problem of achieving a new set-point PSD are illustrated.</p> <p>In the third part a novel strategy is presented for controlling end-product quality properties by solving on-line for the complete MV trajectories over the remainder of the batch. Control through the optimal solution for complete trajectories using empirical models is achieved by performing the model inversion and the MVT reconstruction in the reduced space of a latent variable model. The approach is illustrated with a condensation polymerization example for the production of nylon and with data gathered from an industrial emulsion polymerization process.</p> <p>In the last section an extension of the multi-block multiway Principal Component Analysis (MPCA) and MPLS approaches is introduced to explicitly incorporate batch-to-batch trajectory information. It is shown that the advantage of using information on prior batches for analysis and monitoring is often small; however, it can be useful for detecting problems when monitoring new batches in the early stages of their operation. The approach is illustrated using condensation polymerization and emulsion polymerization systems as examples.</p> / Doctor of Philosophy (PhD)
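The reduced-space inversion idea can be sketched with synthetic data: project historical trajectories onto a low-dimensional score space, regress the end quality on the scores, then invert a quality target back into a complete trajectory. The sketch below uses PCA plus ordinary least squares rather than the PLS models of the thesis, and all data are simulated.

```python
import numpy as np

# Synthetic history: 30 past batches whose 10-point manipulated-variable
# trajectories (MVTs) live on a 2-dimensional latent structure.
rng = np.random.default_rng(0)
T = rng.normal(size=(30, 2))                     # latent scores of past batches
P = rng.normal(size=(2, 10))                     # loadings: 2 dims -> 10 MVT points
X = T @ P + 0.01 * rng.normal(size=(30, 10))     # historical trajectories
y = T @ np.array([1.5, -0.7])                    # end quality driven by the scores

# Fit: PCA of X, then a least-squares map from scores to quality.
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
scores = U[:, :2] * s[:2]
q, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

# Invert: smallest-norm score vector hitting the quality target, then
# reconstruct a full trajectory consistent with past operation.
y_target = 1.0
t_new = q * (y_target - y.mean()) / (q @ q)      # minimum-norm inversion
x_new = X.mean(0) + t_new @ Vt[:2]               # complete reconstructed MVT
y_pred = y.mean() + t_new @ q
```

Because the inversion is carried out in the score space, the reconstructed trajectory automatically lies in the subspace spanned by past operation — the key property the thesis exploits.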

Development of Vision-Based Inferential Sensors for Process Monitoring and Control

Yu, Honglu
<p>This thesis develops inferential sensors for on-line process monitoring and control based on multivariate image analysis. Several methodologies based on multivariate statistical methods, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS), are developed to efficiently extract information in real time from time-varying images and to predict process and product properties. The inferential sensors developed with these methodologies are shown to be sufficient for on-line monitoring and feedback control, as illustrated through two industrial applications: snack food processes and flame monitoring. The methods can be easily extended to a wide variety of other on-line monitoring and control problems.</p> <p>In the snack food applications, features are extracted from RGB color images and used to predict the average coating concentration on the product and the coating coverage distribution over the product pieces. Data collected using both on-line and off-line imaging from several different snack food product lines are used to develop and evaluate the approaches. Results of successful monitoring and control from the on-line applications to the industrial processes are shown. Several robustness issues, such as detection of model inadequacy and on-line correction, are also discussed. Preliminary results on the prediction of organoleptic properties (taste and texture of snack foods) are shown as well.</p> <p>In the flame application, an on-line digital imaging system is developed for monitoring a turbulent nonpremixed flame in an industrial boiler. By using PCA score plots, stable information can be obtained from highly fluctuating flame images. A feature extraction approach is proposed to extract the information from the flame color images. The information extracted from the images is then used, with PLS regression, to successfully predict the performance of the boiler system, such as the energy content of the fuel and the concentrations of NO<sub>x</sub> and SO<sub>2</sub> emissions in the off-gas. The results show that flame images contain a large amount of information useful for monitoring the performance of the boiler system. The approach is very general and can be applied to a wide range of combustion processes.</p> <p>A general framework for building vision-based inferential sensors for monitoring and control of process and product properties using multivariate image analysis is presented. The applications to the snack food processes and the flame monitoring system are shown to fit into this general framework.</p> / Doctor of Philosophy (PhD)
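The feature-extraction step can be sketched as follows: flatten each frame, project it onto principal components learned from a training set, and average the scores over a time window to obtain stable features from a fluctuating scene. The 8×8 synthetic "frames" below stand in for real flame images.

```python
import numpy as np

# Synthetic stand-in for fluctuating flame images: a base pattern whose
# brightness varies frame to frame, plus per-frame noise.
rng = np.random.default_rng(1)
base = rng.random((8, 8))
frames = np.stack([base * (1 + 0.3 * rng.standard_normal()) +
                   0.05 * rng.random((8, 8)) for _ in range(50)])
Xf = frames.reshape(50, -1)          # flatten: one row per frame

# PCA via SVD of the mean-centred frame matrix.
mu = Xf.mean(0)
U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
scores = (Xf - mu) @ Vt[:3].T        # 3 PCA score features per frame

# Time-averaged scores are far steadier than any single noisy frame.
window_feature = scores.mean(axis=0)
```

In a real application these windowed score features, rather than raw pixels, would be fed to a PLS regression against boiler performance measurements.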

Thermal Conductivity of Low Conductivity Solids - A Transient Method Using Spherical Geometry

Archambault, Raynald
Master of Engineering (ME)

Solution Polymerization of Acrylamide to High Conversion

Ishige, Toshiyuki
<p>This thesis reports an experimental investigation of the free radical polymerization of acrylamide in water (Part I) and the development of gel permeation chromatography (GPC) technology for the measurement of molecular weight distribution and conversion (Part II). The aim of Part I was to develop a kinetic model for the polymerization capable of predicting conversion and molecular weight distribution up to high conversion, with the production of polymer of number-average molecular weight over one million. The aim of Part II was to develop the numerical techniques required for instrumental spreading correction in GPC data interpretation. A further aim was to investigate experimentally the feasibility of molecular weight distribution and conversion analysis of polyacrylamide in an aqueous carrier solvent.</p> / Doctor of Philosophy (PhD)
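Instrumental spreading correction in GPC is a deconvolution problem: assuming, as in Tung's axial-dispersion equation, that the detector trace is the true chromatogram convolved with a Gaussian spreading function, the correction can be sketched as a regularized matrix inversion. The peak shape, spreading width, and regularization weight below are all invented for illustration.

```python
import numpy as np

# Retention-volume grid and a hypothetical "true" chromatogram.
v = np.linspace(0, 10, 101)
true = np.exp(-0.5 * ((v - 5.0) / 0.6) ** 2)

# Gaussian instrumental spreading (Tung-type), as a row-normalized matrix.
sigma = 0.4
G = np.exp(-0.5 * ((v[:, None] - v[None, :]) / sigma) ** 2)
G /= G.sum(axis=1, keepdims=True)
meas = G @ true                      # broadened signal the detector would see

# Tikhonov-regularized inversion recovers the narrower true peak while
# suppressing the noise amplification of naive deconvolution.
lam = 1e-3
est = np.linalg.solve(G.T @ G + lam * np.eye(101), G.T @ meas)
```

The measured peak is lower and broader than the true one; the regularized estimate restores most of the lost peak height, which is exactly what a spreading correction must do before molecular weight averages are computed.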

Dynamic Simulation of Large Stiff Systems in a Modular Simulation Framework

Barney, Robert James
<p>"DYNSYS" is a digital simulation package for modelling the dynamic behaviour and automatic control of complex industrial systems. An acronym for Dynamic Systems Simulator, it was developed in the late 1960s in the chemical engineering department at McMaster University. The underlying principle of "DYNSYS" is modularity; i.e., the user assembles mathematical models of process units and control devices to build the process whose transient behaviour is to be studied. A fundamental aspect of dynamic simulation is the numerical solution of ordinary differential equations (o.d.e.s). The original version of "DYNSYS" used a third-order Adams-Moulton-Shell routine; however, this is not sufficient to handle stiff systems, i.e., systems whose time constants differ greatly in magnitude. In chemical engineering, stiff o.d.e.s occur widely in reaction kinetics and to some extent in multistage systems.</p> <p>Conventional numerical techniques are restricted by stability to a very small step size, resulting in large computer times. Many new numerical techniques directed at the efficient solution of systems of stiff o.d.e.s have been published in the recent literature, and a survey of these has been made.</p> <p>Numerical testing of several methods indicated Gear's method, a variable-order, variable-step linear multistep method, to be superior.</p> <p>Most stiff techniques are implicit and require a technique such as Newton-Raphson iteration to converge. Each iteration involves the solution of a system of linear algebraic equations (usually sparse) equal in size to the number of o.d.e.s. For a large stiff system this requires considerable computer time. Various sparse linear equation solvers have been evaluated, and that of Bending and Hutchinson appears to be the most efficient. Their routine stores and operates on only the nonzero elements of the equations. When the equations are solved for the first time, a string of integers called the "operator list" is created which records the particular solution process by Gaussian elimination. If the system is re-solved using the operator list, the amount of computer time required is greatly diminished: if the zero elements remain zero while the nonzero elements change, the same operator list can be used to solve the new system. This is essentially what occurs during numerical integration, so the operator list can be set up on the first integration step and used on later steps to solve each new linear system.</p> <p>Gear's integration algorithm, in conjunction with the Bending-Hutchinson linear equation solver, has been implemented in DYNSYS version 2.0. An option for stiff systems with a tridiagonal Jacobian matrix is also included. The procedure for writing modules is outlined.</p> <p>Four small examples are presented to illustrate the new executive:</p> <p>(1) The level control of a stirred tank system (nonstiff) with time delay.</p> <p>(2) A network of 15 stirred tank reactors, stiff and nonstiff, with 2 o.d.e.s per reactor.</p> <p>(3) A tubular reactor with 222 stiff o.d.e.s resulting from the discretization of the partial differential equations.</p> <p>(4) A tubular reactor with 49 stiff o.d.e.s with a tridiagonal Jacobian matrix.</p> <p>A simulation of a fictitious chemical plant proposed by Williams and Otto is also described.</p> / Doctor of Philosophy (PhD)
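The stiff-integration point can be illustrated with SciPy's BDF option, which implements a Gear-type variable-order, variable-step method. The two-time-constant system below is illustrative, not one of the thesis examples.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classically stiff system: the fast species relaxes ~100,000x faster than the
# slow one, so an explicit method would be stability-limited to tiny steps.
def rhs(t, y):
    return [-1.0e4 * y[0] + y[1],   # fast mode, time constant 1e-4
            -0.1 * y[1]]            # slow mode, time constant 10

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0],
                method="BDF", rtol=1e-8, atol=1e-10)
# An implicit BDF (Gear-type) solver steps through the slow dynamics without
# being constrained by the fast mode's stability limit of ~1/1e4.
```

The slow species follows exp(-0.1 t) while the fast species collapses onto its quasi-steady value y₁/10⁴, which BDF resolves in a modest number of steps.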
