131

Mathematical Modeling of Carbon Removal in the A-Stage Activated Sludge System

Nogaj, Thomas 01 January 2015
This research developed a dynamic activated sludge model (ASM) to better describe the overall removal of organic substrate, quantified as chemical oxygen demand (COD), from A-stage high-rate activated sludge (HRAS) systems. This dynamic computer model is based on a modified ASM1 (Henze et al., 2000) model. It was determined early in the project that influent soluble COD, which is normally represented by a single state variable in ASM1, had to be subdivided into two state variables (SBs and SBf, the slow and fast fractions) to simulate the performance of A-stage systems. The addition of state variables differentiating colloidal COD from suspended COD was also necessary, because the short hydraulic residence times in A-stage systems do not allow for complete enmeshment and bioflocculation of these particles, as occurs in conventional activated sludge systems (which have longer solids retention times and hydraulic retention times). Several processes (both stoichiometry and kinetic equations) had to be added to the original ASM1 model, including heterotrophic growth on both soluble substrate fractions and bioflocculation of colloidal solids. Properly quantifying heterotrophic growth on SBs and SBf led to two separate approaches with respect to the process kinetic equations. In one approach, SBf was metabolized preferentially over SBs, which was utilized only when SBf was not available; this is referred to as the Diauxic Model. In the other approach, SBf and SBs were metabolized simultaneously; this is referred to as the Dual Substrate Model. The Dual Substrate Model calibrated slightly better than the Diauxic Model for one of the two available pilot-study data sets (the other set was used for model verification). The Dual Substrate A-stage model was used to describe the effects of varying specific operating parameters, including solids retention time (SRT), dissolved oxygen (DO), influent COD, and temperature, on the effluent COD:N ratio. The effluent COD:N ratio target was based on its suitability for a downstream nitrite shunt (i.e., nitritation/denitritation) process, in which the goal is to eliminate nitrite-oxidizing bacteria (NOB) from the reactor while selecting for ammonia-oxidizing bacteria (AOB). The results showed that a low SRT (< 0.25 d) can produce high effluent substrate concentrations (SB and CB) and elevated COD:N ratios consistent with NOB out-selection downstream. The HRAS model was able to predict the measured higher fraction of CB in the A-stage effluent at lower SRTs and DO concentrations. To achieve the benefits of operating an A-stage process while maintaining an effluent COD:N ratio suitable for a downstream nitritation/denitritation process, an A-stage SRT in the range of 0.1 to 0.25 d should be maintained. This research also included an analysis of A-stage pilot data using stoichiometry to determine the bio-products formed from soluble substrate removed in an A-stage reactor. The results were used to further refine the process components and stoichiometric parameters used in the A-stage dynamic computer model, which includes process mechanisms for flocculation and enmeshment of particulate and colloidal substrate, hydrolysis, production of extracellular polymeric substances (EPS), and storage of soluble biodegradable substrate. Analysis of pilot data and simulations with the dynamic computer model indirectly indicated that storage products were probably significant in A-stage COD removal.
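The Dual Substrate kinetics described above can be sketched in a few lines of code. The snippet below assumes simultaneous Monod-type heterotrophic growth on the fast (SBf) and slow (SBs) soluble COD fractions; the rate constants, half-saturation coefficients, and yield are illustrative placeholders rather than the calibrated A-stage values from this dissertation, and decay, bioflocculation, and the colloidal fraction are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX = 6.0   # 1/d, maximum heterotrophic growth rate (assumed)
K_SF = 5.0     # g COD/m^3, half-saturation for the fast fraction (assumed)
K_SS = 30.0    # g COD/m^3, half-saturation for the slow fraction (assumed)
Y_H = 0.45     # g COD biomass per g COD substrate (assumed)

def dual_substrate_rates(t, y):
    """Simultaneous Monod growth on fast and slow soluble COD (Dual Substrate idea)."""
    sbf, sbs, xh = y                 # fast COD, slow COD, heterotrophs (g COD/m^3)
    mu_f = MU_MAX * sbf / (K_SF + sbf)
    mu_s = MU_MAX * sbs / (K_SS + sbs)
    return [-(mu_f * xh) / Y_H,      # fast substrate consumption
            -(mu_s * xh) / Y_H,      # slow substrate consumption
            (mu_f + mu_s) * xh]      # biomass growth (decay neglected)

# Batch simulation over ~1.2 h, roughly the time scale of an A-stage contact.
sol = solve_ivp(dual_substrate_rates, (0.0, 0.05), [60.0, 120.0, 1500.0])
print("final SBf, SBs, XH:", np.round(sol.y[:, -1], 1))
```

In the Diauxic variant, the SBs term would instead be multiplied by an inhibition factor such as K_SF / (K_SF + sbf), so that the slow fraction is consumed only once the fast fraction is depleted.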
132

Deposition Thickness Modeling and Parameter Identification for Spray Assisted Vacuum Filtration Process in Additive Manufacturing

Mark, August 01 January 2015
To enhance the mechanical and/or electrical properties of composite materials used in additive manufacturing, nanoparticles are often deposited to form nanocomposite layers. To customize the mechanical and/or electrical properties, the thickness of such nanocomposite layers must be precisely controlled. A thickness model of filter cakes created through spray-assisted vacuum filtration is presented in this work, to enable the development of advanced thickness controllers. The mass transfer dynamics in the spray atomization and vacuum filtration are studied for the mass of solid particles and the mass of water in differential areas, and the thickness of a filter cake is then derived. A two-loop nonlinear constrained optimization approach is used to identify the unknown parameters in the model. Experiments involving the deposition of carbon nanofibers in a sheet of paper are used to measure the ability of the model to mimic the filtration process.
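The two-loop parameter identification can be illustrated with a toy sketch: an outer one-dimensional search over one model parameter wraps an inner constrained least-squares fit of the remaining parameter. The saturating cake-thickness expression, the data, and the bounds below are invented for illustration and are not the mass-transfer model or measurements of this work.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Hypothetical thickness measurements (m) following a saturating cake-growth curve.
t_data = np.linspace(0, 120, 13)                                   # s
h_data = 80e-6 * (1 - np.exp(-t_data / 35.0)) \
         + np.random.default_rng(0).normal(0, 1e-6, t_data.size)

def model(t, h_max, tau):
    return h_max * (1 - np.exp(-t / tau))

def inner_fit(tau):
    """Inner loop: constrained fit of h_max for a fixed time constant tau."""
    res = minimize(lambda p: np.sum((model(t_data, p[0], tau) - h_data) ** 2),
                   x0=[50e-6], bounds=[(1e-6, 500e-6)])
    return res.fun, res.x[0]

# Outer loop: bounded one-dimensional search over tau.
outer = minimize_scalar(lambda tau: inner_fit(tau)[0],
                        bounds=(1.0, 300.0), method="bounded")
best_tau = outer.x
best_hmax = inner_fit(best_tau)[1]
print(f"identified tau ~ {best_tau:.1f} s, h_max ~ {best_hmax * 1e6:.1f} um")
```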
133

Process Modeling of Thermoplastics and Thermosetting Polymer Matrix Composites (PMCs) Manufactured Using Fused Deposition Modeling (FDM)

Hutten, Victoria Elizabeth January 2017
No description available.
134

Modeling the Transient Effects during the Hot-Pressing of Wood-Based Composites

Zombori, Balazs Gergely 27 April 2001
A numerical model based on fundamental engineering principles was developed and validated to establish a relationship between process parameters and the final properties of wood-based composite boards. The model simulates the mat formation and then compresses the reconstituted mat to its final thickness in a virtual press. The number of interacting variables during the hot-compression process is prohibitively large to assess a wide variety of conditions by experimental means. Therefore, the main advantage of the model-based approach is that the effect of the hot-compression parameters on the final properties of wood-based composite boards can be monitored without extensive experimentation. The mat formation part of the model is based on the Monte Carlo simulation technique to reproduce the spatial structure of the mat. The dimensions and the density of each flake are considered as random variables in the model, which follow certain probability density distributions. The parameters of these distributions are derived from data collected on industrial flakes by using an image analysis technique. The model can simulate the structure of a three-layer oriented strandboard (OSB) mat as well as the structure of random fiber networks. A grid is superimposed on the simulated mat, and the number of flakes, the thickness, and the density of the mat at each grid point are computed. Additionally, the model predicts the change in several void volume fractions within the mat and the contact area between the flakes during consolidation. The void volume fractions are directly related to the physical properties of the mat, such as thermal conductivity, diffusivity, and permeability, and the contact area is an indicator of the effectively bonded area within the mat. The heat and mass transfer part of the model predicts the change of air content, moisture content, and temperature at designated mesh points in the cross section of the mat during the hot-compression. The water content is subdivided into vapor and bound water components. The free water component is not considered in the model due to the low (typically 6-7 %) initial moisture content of the flakes. The gas phase (air and vapor) moves by bulk flow and diffusion, while the bound water moves only by diffusion across the mat. Heat flow occurs by conduction and convection. The spatial derivatives of the resulting coupled partial differential equations are discretized by finite differences. The resulting ordinary differential equations in time are solved by a differential-algebraic system solver (DDASSL). This part of the hot-compression model can predict the internal environment within the mat under different initial and boundary conditions. In the next phase of the research, the viscoelastic (time, temperature, and moisture dependent) response of the flakes was modeled using the time-temperature-moisture superposition principle of polymers. A master curve was created from data available in the literature, which describes the changing relaxation modulus of the flakes as a function of moisture and temperature at different locations in the mat. The flake mat was then compressed in a virtual press. The stress-strain response is highly nonlinear due to the cellular structure of the mat. Hooke's Law was modified with a nonlinear strain function to account for the behavior of the flake mat in transverse compression. This part of the model gives insight into the formation of the vertical density profile through the thickness of the mat.
Laboratory boards were produced to validate the model. A split-plot experimental design, with three different initial mat moisture contents (5, 8.5, 12 %), three final densities (609, 641, 673 kg/m³ or 38, 40, 42 lb/ft³), two press platen temperatures (150, 200 °C), and three different press closing times (40, 60, 80 s), was applied to investigate the effect of production parameters on the internal mat conditions and the formation of the vertical density profile. The temperature and gas pressure at six locations in the mat, and the resultant density profiles of the laboratory boards, were measured. Adequate agreement was found between the model-predicted and the experimentally measured temperature, pressure, and vertical density profiles. The complete model uses pressing parameters (press platen temperature, press schedule) and mat properties (flake dimensions and orientation, density distribution, initial moisture content and temperature) to predict the resulting internal conditions and vertical density profile formation within the compressed board. The density profile is related to all the relevant mechanical properties (bending strength, modulus of elasticity, internal bond strength) of the final board. The model can assist in the optimization of the parameters for hot-pressing wood-based composites and improve the performance of the final panel. / Ph. D.
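A rough sketch of the Monte Carlo mat-formation step described above is given below: flakes with randomly drawn dimensions and density are dropped onto a mat, and a superimposed grid accumulates the local flake count and areal density. The distribution parameters are invented for illustration and are not the image-analysis values used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
MAT_L = MAT_W = 0.6            # m, simulated mat area
GRID = 30                      # grid points per side
N_FLAKES = 20000

count = np.zeros((GRID, GRID))           # flakes per cell
mass = np.zeros((GRID, GRID))            # kg per cell
cell_area = (MAT_L / GRID) * (MAT_W / GRID)

for _ in range(N_FLAKES):
    x, y = rng.uniform(0, MAT_L), rng.uniform(0, MAT_W)
    length = max(rng.normal(0.08, 0.02), 0.01)         # m   (assumed distribution)
    width = max(rng.normal(0.02, 0.005), 0.005)        # m
    thickness = max(rng.normal(7e-4, 1e-4), 2e-4)      # m
    density = max(rng.normal(450.0, 50.0), 300.0)      # kg/m^3, flake density
    i = min(int(x / MAT_L * GRID), GRID - 1)
    j = min(int(y / MAT_W * GRID), GRID - 1)
    # Crude bookkeeping: lump the whole flake into the cell under its centroid.
    count[i, j] += 1
    mass[i, j] += length * width * thickness * density

areal_density = mass / cell_area                       # kg/m^2 per grid point
print(f"mean flakes per cell: {count.mean():.1f}")
print(f"mean areal density:   {areal_density.mean():.2f} kg/m^2")
```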
135

Vacuum-Assisted Resin Transfer Molding (VARTM) Model Development, Verification, and Process Analysis

Sayre, Jay Randall 24 April 2000
Vacuum-Assisted Resin Transfer Molding (VARTM) processes are becoming promising technologies for manufacturing primary composite structures in the aircraft industry as well as in infrastructure. A great deal of work is still needed to reduce the costly trial-and-error methods of VARTM processing that are currently in practice. A computer simulation model of the VARTM process would provide a cost-effective tool for manufacturing composites with this technique. Therefore, the objective of this research was to modify an existing three-dimensional Resin Film Infusion (RFI)/Resin Transfer Molding (RTM) model to include VARTM simulation capabilities and to verify this model through the fabrication of aircraft structural composites. An additional objective was to use the VARTM model as a process analysis tool that would enable the user to configure the best process for manufacturing quality composites. Experimental verification of the model was performed by processing several flat composite panels. The parameters verified included flow front patterns and infiltration times. The flow front patterns were determined to be qualitatively accurate, while the simulated infiltration times over-predicted the experimental times by 8 to 10%. Capillary and gravitational forces were incorporated into the existing RFI/RTM model in order to simulate VARTM processing physics more accurately. The theoretical capillary pressure showed the capability to reduce the simulated infiltration times by as much as 6%. Gravity, on the other hand, was found to be negligible for all cases. Finally, the VARTM model was used as a process analysis tool, enabling the user to determine such important process constraints as the location and type of injection ports and the permeability and location of the high-permeability media. A process for a three-stiffener composite panel was proposed. This configuration evolved from varying the process constraints in the modeling of several different composite panels, and was proposed by considering factors such as infiltration time, the number of vacuum ports, and possible areas of void entrapment. / Ph. D.
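The roughly 6% reduction attributed to capillary pressure can be rationalized with a one-dimensional Darcy estimate in which the capillary pressure simply adds to the vacuum driving pressure. The sketch below uses a textbook rectilinear-infusion fill-time formula, not the three-dimensional RFI/RTM model of this work, and all property values are assumptions.

```python
K = 2.0e-10     # m^2, in-plane preform permeability (assumed)
PHI = 0.55      # fiber-bed porosity (assumed)
MU = 0.2        # Pa*s, resin viscosity (assumed)
P_VAC = 90e3    # Pa, vacuum-driven pressure difference
P_CAP = 5e3     # Pa, capillary pressure assisting the flow (assumed)
L = 0.5         # m, flow length

def fill_time(delta_p):
    # 1-D rectilinear infusion: t = phi * mu * L^2 / (2 * K * delta_p)
    return PHI * MU * L ** 2 / (2.0 * K * delta_p)

t_no_cap = fill_time(P_VAC)
t_cap = fill_time(P_VAC + P_CAP)
print(f"fill time, vacuum only:      {t_no_cap / 60:.1f} min")
print(f"fill time, with capillarity: {t_cap / 60:.1f} min "
      f"({100 * (1 - t_cap / t_no_cap):.1f}% shorter)")
```

With a capillary pressure of a few kPa against a ~90 kPa vacuum drive, the estimated fill-time reduction is on the order of 5%, which is consistent in magnitude with the simulation result reported above.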
136

Process Modeling, Performance Analysis and Configuration Simulation in Integrated Supply Chain Network Design

Dong, Ming 27 August 2001
Supply chain management has recently been introduced to address the integration of organizational functions, ranging from the ordering and receipt of raw materials, through the manufacturing processes, to the distribution and delivery of products to the customer. Its application demonstrates that this idea enables organizations to achieve higher quality products, better customer service, and lower inventory cost. In order to achieve high performance, supply chain functions must operate in an integrated and coordinated manner. Several challenging problems associated with integrated supply chain design are: (1) how to model and coordinate the supply chain business processes, specifically in the area of supply chain workflows; (2) how to analyze the performance of an integrated supply chain network so that optimization techniques can be employed to improve customer service and reduce inventory cost; and (3) how to evaluate dynamic supply chain networks and obtain a comprehensive understanding of decision-making issues related to supply network configurations. These problems are representative of current supply chain research and applications. There are three major objectives for this research. The first objective is to develop viable modeling methodologies and analysis algorithms for supply chain business processes so that the logic properties of supply chain process models can be analyzed and verified. This problem has not been studied in the integrated supply chain literature to date. To facilitate the modeling and verification analysis of supply chain workflows, an object-oriented, Petri-net-based modular modeling and analysis approach is presented. The proposed structured process-modeling algorithm provides an effective way to design structured supply chain business processes. The second objective is to develop a network of inventory-queue models for the performance analysis and optimization of an integrated supply network with inventory control at all sites. An inventory-queue is a queueing model that incorporates an inventory replenishment policy for the output store. This dissertation extends previous work on the supply network model with base-stock control and service requirements. Instead of a one-for-one base-stock policy, batch-ordering policies and lot-sizing problems are considered. To determine the replenishment lead times of items at the stores, a fixed-batch target-level production authorization mechanism is employed to explicitly obtain performance measures of the supply chain queueing model. The validity of the proposed model is illustrated by comparing the results from the analytical performance evaluation model with those obtained from the simulation study. The third objective is to develop simulation models for understanding decision-making issues of the supply chain network configuration in an integrated environment. Simulation studies investigate multi-echelon distribution systems with an installation stock reorder policy and an echelon stock reorder policy. The results show that, depending on the structure of multi-echelon distribution systems, either the echelon stock or the installation stock policy may be advantageous. This dissertation presents a new transshipment policy, called the "alternate transshipment policy," to improve supply chain performance. In an integrated supply chain network that considers both the distribution function and the manufacturing function, the impacts of component commonality on network performance are also evaluated.
The results of analysis-of-variance and Tukey's tests reveal that there is a significant difference in performance measures, such as delivery time and order fill rates, when comparing an integrated supply chain with higher component commonality to an integrated supply chain with lower component commonality. Several supply chain network examples are employed to substantiate the effectiveness of the proposed methodologies and algorithms. / Ph. D.
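As a toy illustration of the inventory-queue setting described above, the sketch below simulates a single store facing Poisson demand under a batch-ordering (r, Q) policy with a fixed replenishment lead time and reports the resulting fill rate. It is a single-echelon simplification of the multi-site network models in the dissertation, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
DAYS = 10000
DEMAND_RATE = 4.0      # units/day, Poisson demand
R, Q = 20, 30          # reorder point and order batch size
LEAD_TIME = 3          # days

on_hand = R + Q
pipeline = []          # outstanding orders as (arrival_day, quantity)
backorders = 0
filled = demanded = 0
on_hand_total = 0

for day in range(DAYS):
    # Receive orders arriving today.
    on_hand += sum(q for d, q in pipeline if d == day)
    pipeline = [(d, q) for d, q in pipeline if d != day]
    # Clear backorders first, then serve today's demand.
    served_back = min(on_hand, backorders)
    on_hand -= served_back
    backorders -= served_back
    demand = rng.poisson(DEMAND_RATE)
    served = min(on_hand, demand)
    demanded += demand
    filled += served
    on_hand -= served
    backorders += demand - served
    # Batch ordering: order Q whenever the inventory position falls to R or below.
    position = on_hand - backorders + sum(q for _, q in pipeline)
    while position <= R:
        pipeline.append((day + LEAD_TIME, Q))
        position += Q
    on_hand_total += on_hand

print(f"fill rate (demand met from stock): {filled / demanded:.3f}")
print(f"average on-hand inventory:         {on_hand_total / DAYS:.1f}")
```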
137

Integrated Process Modeling and Data Analytics for Optimizing Polyolefin Manufacturing

Sharma, Niket 19 November 2021
Polyolefins are among the most widely used commodity polymers, with applications in films, packaging, and the automotive industry. The modeling of polymerization processes producing polyolefins, including high-density polyethylene (HDPE), polypropylene (PP), and linear low-density polyethylene (LLDPE) using Ziegler-Natta catalysts with multiple active sites, is a complex and challenging task. In our study, we integrate process modeling and data analytics for improving and optimizing polyolefin manufacturing processes. Most of the current literature on polyolefin modeling does not consider all of the commercially important production targets when quantifying the relevant polymerization reactions and their kinetic parameters based on measurable plant data. We develop an effective methodology to estimate the kinetic parameters that have the most significant impacts on specific production targets, and to develop the kinetics using all commercially important production targets, validated over industrial polyolefin processes. We showcase the utility of dynamic models for efficient grade transition in polyolefin processes and also use the dynamic models for inferential control of polymer processes. Thus, we demonstrate a methodology for building first-principles polyolefin process models that are scientifically consistent but tend to be less accurate due to the many modeling assumptions required in a complex system. Data analytics and machine learning (ML) have been applied in the chemical process industry for accurate predictions in data-based soft sensors and process monitoring/control. They are particularly useful for polymer processes, since polymer quality measurements such as melt index and molecular weight are usually less frequent than the continuous process variable measurements. We showcase the use of predictive machine learning models such as neural networks for predicting polymer quality indicators, and demonstrate the utility of causal models such as partial least squares to study the causal effect of the process parameters on the polymer quality variables. ML models can produce accurate results but may over-fit the data and give scientifically inconsistent predictions beyond the operating data range. Thus, it is increasingly important to develop hybrid models combining data-based ML models and first-principles models. We present a broad perspective on hybrid process modeling and optimization that combines scientific knowledge and data analytics in bioprocessing and chemical engineering through a science-guided machine learning (SGML) approach, rather than just direct combinations of first-principles and ML models. We present a detailed review of the scientific literature relating to the hybrid SGML approach and propose a systematic classification of hybrid SGML models according to their methodology and objective. We identify themes and methodologies that have not been explored much in chemical engineering applications, such as the use of scientific knowledge to improve the ML model architecture and learning process for more scientifically consistent solutions. We apply these hybrid SGML techniques, including inverse modeling and science-guided loss functions, to industrial polyolefin processes to which they have not been applied previously. / Doctor of Philosophy / Almost everything we see around us, from furniture and electronics to bottles and cars, is made fully or partially from plastic polymers.
The two most popular polymers, which together comprise almost two-thirds of polymer production globally, are polyethylene (PE) and polypropylene (PP), collectively known as polyolefins. Hence, the optimization of polyolefin manufacturing processes with the aid of simulation models is critical and profitable for the chemical industry. Modeling of a chemical/polymer process is helpful for process scale-up, product quality estimation/monitoring, and new process development. To build a good simulation model, we need to validate its predictions against actual industrial data. The polyolefin process has complex reaction kinetics with multiple parameters that need to be estimated to accurately match the industrial process. We have developed a novel strategy for estimating the kinetics for the model, including the reaction chemistry and the polymer quality information, validated against an industrial process. Thus, we have developed a science-based model that includes knowledge of reaction kinetics, thermodynamics, and heat and mass balances for the polyolefin process. The science-based model is scientifically consistent, but may not be very accurate due to many model assumptions. Therefore, for applications requiring very high accuracy in predicting polymer quality targets such as melt index (MI) and density, data-based techniques may be more appropriate. Recently, we have heard a lot about artificial intelligence (AI) and machine learning (ML); the basic principle behind these methods is to make the model learn from data for prediction. The process data measured in a chemical/polymer plant can be utilized for data analysis. We can build ML models to predict polymer targets like MI as a function of the input process variables. The ML model predictions are very accurate within the operating range of the dataset on which the model is trained, but outside that range they may give scientifically inconsistent results. Thus, there is a need to combine the data-based models and scientific models. In our research, we showcase novel approaches to integrate science-based models and data-based ML methodology, which we term hybrid science-guided machine learning (SGML) methods. The hybrid SGML methods applied to polyolefin processes yield not only accurate but also scientifically consistent predictions, which can be used for polyolefin process optimization in applications like process development and quality monitoring.
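One flavor of the hybrid SGML idea, a science-guided loss, can be sketched as follows: a purely data-driven melt-index soft sensor is trained with an extra penalty whenever it violates the known qualitative trend that MI increases with the hydrogen-to-monomer ratio. The data, functional form, and penalty weight below are synthetic placeholders, not the industrial models developed in this dissertation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
h2 = rng.uniform(0.01, 0.20, 200)                               # hypothetical H2/C2 ratio
mi_obs = 0.5 + 40.0 * h2 ** 1.5 + rng.normal(0, 0.3, h2.size)   # synthetic MI data

X = np.column_stack([np.ones_like(h2), h2, h2 ** 2])            # quadratic soft sensor
x_grid = np.linspace(0.01, 0.20, 50)                            # where physics is checked
G = np.column_stack([np.zeros_like(x_grid), np.ones_like(x_grid), 2 * x_grid])

LAM = 10.0   # weight of the science-guided (monotonicity) penalty

def science_guided_loss(w):
    data_term = np.mean((X @ w - mi_obs) ** 2)                  # ordinary fit error
    slope = G @ w                                               # d(MI)/d(H2 ratio) on grid
    physics_term = np.mean(np.minimum(slope, 0.0) ** 2)         # penalize decreasing MI
    return data_term + LAM * physics_term

w = minimize(science_guided_loss, x0=np.zeros(3)).x
print("fitted coefficients:", np.round(w, 3))
print("minimum predicted slope on grid:", round(float((G @ w).min()), 3))
```

The same pattern carries over to neural-network soft sensors: the physics term is simply added to the training loss so that the learned model stays qualitatively consistent outside the densest part of the operating data.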
138

A Dual Metamodeling Perspective for Design and Analysis of Stochastic Simulation Experiments

Wang, Wenjing 17 July 2019
Fueled by a growing number of applications in science and engineering, the development of stochastic simulation metamodeling methodologies has gained momentum in recent years. A majority of the existing methods, such as stochastic kriging (SK), focus only on efficiently metamodeling the mean response surface implied by a stochastic simulation experiment. As the simulation outputs are stochastic, with the simulation variance varying significantly across the design space, suitable methods for variance modeling are required. This thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously for achieving an improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-squares support vector regression and propose to use a "dense and shallow" initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of "transductive combination of GP experts." / Doctor of Philosophy / In solving real-world complex engineering problems, it is often helpful to learn the relationship between the decision variables and the response variables to better understand the real system of interest. Directly conducting experiments on the real system can be impossible or impractical, due to the high cost or time involved. Instead, simulation models are often used as a surrogate to model complex stochastic systems for conducting simulation-based design and analysis. However, even simulation models can be very expensive to run. To alleviate the computational burden, a metamodel is often built from the outputs of simulation runs at selected design points to map the performance response surface as a function of the controllable decision variables, or uncontrollable environmental variables, and thereby approximate the behavior of the original simulation model. There has been a plethora of work in the simulation research community dedicated to studying stochastic simulation metamodeling methodologies suitable for analyzing stochastic simulation experiments in science and engineering. A majority of the existing methods, such as stochastic kriging (SK), are known as effective metamodeling tools for approximating a mean response surface implied by a stochastic simulation. Although SK has been extensively used as an effective metamodeling methodology for stochastic simulations, SK and similar metamodeling techniques still face four methodological barriers: 1) a lack of study of variance estimation methods; 2) the absence of an efficient experimental design for simultaneous mean and variance metamodeling; 3) a lack of flexibility to accommodate situations where simulation replications are not available; and 4) a lack of scalability.
To overcome the aforementioned barriers, this thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously for achieving an improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-square support vector regression and propose to use a “dense and shallow” initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of “transductive combination of GP experts.”
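A minimal sketch of the first idea, replacing raw sample variances with smoothed variance estimates before fitting the mean surface, is shown below. It kernel-smooths the log sample variances across the design points and passes the smoothed variances of the sample means to scikit-learn's Gaussian process regressor as per-point noise levels; this is a simplified stand-in for stochastic kriging, and the test function and all settings are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)
design = np.linspace(0, 1, 15)[:, None]          # design points in [0, 1]
n_reps = 20                                      # simulation replications per point

def simulate(x):
    noise_sd = 0.2 + 1.5 * x                     # heteroscedastic simulation noise
    return np.sin(6 * x) + rng.normal(0, noise_sd)

outputs = np.array([[simulate(x[0]) for _ in range(n_reps)] for x in design])
ybar = outputs.mean(axis=1)                      # sample means
s2 = outputs.var(axis=1, ddof=1)                 # raw sample variances

# Kernel-smooth the log sample variances across the design space.
bw = 0.15
w = np.exp(-0.5 * ((design - design.T) / bw) ** 2)
s2_smooth = np.exp(w @ np.log(s2) / w.sum(axis=1))

# Fit the mean surface, treating the smoothed variance of each sample mean as noise.
gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
                              alpha=s2_smooth / n_reps)
gp.fit(design, ybar)
mean, sd = gp.predict(np.array([[0.37]]), return_std=True)
print(f"prediction at x = 0.37: {mean[0]:.3f} +/- {sd[0]:.3f}")
```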
139

Approche générique pour la modélisation et l'implémentation des processus / A generic approach for process modeling and implementation

Ulmer, Jean-Stéphane 11 February 2011
A company must be able to describe its processes and to remain responsive to endogenous or exogenous events. Such flexibility can be obtained through Business Process Management (BPM). In a BPM approach, different transformations operate on the process models developed by the business analyst and the IT expert. A non-alignment is created between these heterogeneous models during their manipulation: this is the "business-IT gap" described in the literature. The objective of our work is to propose a methodological framework for better management of business processes, in order to reach a systematic alignment from their modeling to their implementation within the target system. Using concepts from model-driven Enterprise and Information Systems engineering and from IT, we define a generic approach ensuring inter-model consistency. Its role is to maintain and provide all information related to model structure and semantics. By allowing the full restitution of a transformed model, in the sense of reverse engineering, our platform enables synchronization between the analysis model and the implementation model. The manuscript also presents the possible match between process engineering and BPM through a multi-scale perspective.
140

Experimental and theoretical assessment of thin glass panels as interposers for microelectronic packages

McCann, Scott R. 22 May 2014
As the microelectronics industry moves toward stacking of dies to achieve greater performance and a smaller footprint, there are several reliability concerns when assembling the stacked dies on current organic substrates. These concerns include excessive warpage, interconnect cracking, die cracking, and others. Silicon interposers are being developed to carry the stacked dies, and the silicon interposers are then assembled on organic substrates. Although such an approach could address stacked-die-to-interposer reliability concerns, there are still reliability concerns between the silicon interposer and the organic substrate. This work examines the use of a diced glass panel as an interposer, as glass provides a coefficient of thermal expansion intermediate between silicon and organics, good mechanical rigidity, large-area panel processing for low cost, planarity, and better electrical properties. However, glass is brittle and low in thermal conductivity, and there is very little work in the existing literature examining glass as a potential interposer material. Starting with a 150 × 150 mm glass panel with a thickness of 100 µm, this work has built alternating layers of dielectric and copper on both sides of the panel. The panels have gone through typical cleanroom processes such as lithography, electroplating, etc. Upon fabrication, the panels are diced into individual substrates of 25 × 25 mm, and a 10 × 10 mm flip chip with a solder bump pitch of 75 µm is then reflow-attached to the glass substrate, followed by underfill dispensing and curing. The warpage of the flip-chip assembly is measured. In parallel with the experiments, numerical models have been developed. These models account for the viscoplastic behavior of the solder and mimic material addition and etching through an element “birth-and-death” approach. The warpage from the models has been compared against experimental measurements for glass substrates with flip-chip assembly. It is seen that the glass substrates provide significantly lower warpage compared to organic substrates, and thus could be a potential candidate for future 3D systems.
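The benefit of glass's intermediate CTE can be illustrated with a back-of-the-envelope bi-material estimate. The sketch below applies Timoshenko's bi-metal strip curvature formula to a silicon die bonded to either a glass or an organic substrate; it ignores the copper layers, underfill, and viscoplastic solder captured by the finite element models above, and the material values are nominal assumptions.

```python
def curvature(e1, t1, a1, e2, t2, a2, d_temp):
    """Timoshenko bi-metal strip curvature (1/m) for a temperature change d_temp."""
    h = t1 + t2
    m = t1 / t2
    n = e1 / e2
    num = 6.0 * (a2 - a1) * d_temp * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

D_T = -195.0                                   # K, cool-down from reflow (assumed)
SI = (130e9, 0.73e-3, 2.6e-6)                  # E (Pa), thickness (m), CTE (1/K)
GLASS = (70e9, 0.10e-3, 8.0e-6)                # nominal values (assumed)
ORGANIC = (25e9, 0.40e-3, 17.0e-6)             # nominal values (assumed)

for name, (e2, t2, a2) in [("glass", GLASS), ("organic", ORGANIC)]:
    k = curvature(SI[0], SI[1], SI[2], e2, t2, a2, D_T)
    sag = abs(k) * (5e-3) ** 2 / 2.0           # bow over the 5 mm die half-length
    print(f"{name:8s} substrate: |curvature| = {abs(k):.2f} 1/m, "
          f"die-scale warpage ~ {sag * 1e6:.1f} um")
```

Even this crude estimate shows the glass case bowing several times less than the organic case over the die footprint, in line with the trend reported above.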
