1

Synthesis, optimisation and control of crystallization systems

Sheikh, Ahmad Yahya January 1997 (has links)
Process systems engineering has provided chemical engineers with a range of powerful tools for synthesis, optimisation and control, built on a thorough understanding of processes supported by sophisticated, accurate, multi-faceted mathematical models. Crystallization processes have rarely benefited from these techniques because they lack models that could bridge the gaps in understanding before the resulting insight is applied to the three tasks mentioned above. In the present work, consistent and sufficiently complex models for unit operations, including the MSMPR crystallizer, hydrocyclone and fines dissolver, are first developed to enhance the understanding of systems comprising these units. This insight is then used to devise innovative techniques to synthesise, optimise and control such processes. A constructive targeting approach is developed for the innovative synthesis of stage-wise crystallization processes. The resulting solution surpasses the performance obtained from conventional design procedures not only because optimal temperature profiles are used along the crystallizers but also because the distribution of feed and product removal is determined optimally through non-linear programming. The revised machine learning methodology presented here for continual process improvement, which analyses process data and represents the findings as a zone of best average performance, uses the developed models to generate data in the absence of real plant data. The methodology, demonstrated on a KNO₃ crystallization process flowsheet, quickly identifies three improvement opportunities, each representing a 12% increase over nominal operation. An optimal multi-variable controller has been designed for a one-litre continuous recycle crystallizer to indirectly control the total number and average size of crystals from secondary process measurements. System identification is based solely on experimental data. A Linear Quadratic Gaussian (LQG) design procedure is developed for the controller, which not only shows excellent set-point tracking but also effectively rejects disturbances in simulated closed-loop runs.
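An LQG design of the kind mentioned above combines an optimal state-feedback gain with a Kalman estimator. The minimal Python sketch below shows that structure for a hypothetical two-state linear crystallizer model; the matrices A, B, C and the weighting and noise covariances are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of an LQG design for a small linear model.
# A, B, C and the weighting/noise matrices are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state linear model (e.g. deviations in crystal number and mean size)
A = np.array([[-0.5, 0.1],
              [0.0, -0.2]])
B = np.array([[0.0],
              [1.0]])        # one manipulated input (e.g. fines-dissolver duty)
C = np.array([[1.0, 0.0]])   # one secondary measurement

# LQR state-feedback gain: minimise the integral of x'Qx + u'Ru
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # control law: u = -K x_hat

# Kalman filter gain for the estimator (process/measurement noise covariances W, V)
W = np.diag([0.01, 0.01])
V = np.array([[0.001]])
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)           # x_hat_dot = A x_hat + B u + L (y - C x_hat)

print("LQR gain K:", K)
print("Kalman gain L:", L.ravel())
```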
2

Inclusion of leakage into life cycle management of products involving plastic as a material choice

Chitaka, Takunda Yeukai 19 January 2021 (has links)
The accumulation of plastic waste in the natural environment has been a major environmental concern for many decades. However, the environmental impacts associated with leakage are not taken into consideration under current life-cycle based approaches, despite packaging being a major application area of life cycle assessment. Furthermore, there is limited quantitative information on the leakage propensities and rates of different products. This presents a critical limitation during the life cycle management (LCM) of products destined for regions where they are likely to be dumped or littered. This thesis investigates the feasibility and influence of using product-specific leakage rates as a proxy indicator for potential marine environmental impacts, to inform the life cycle management of products in which plastic is a material choice. In particular, it explores whether a realistic understanding of leakage rates, differentiated by major use, may facilitate the development of effective interventions to mitigate the growing problem of marine plastic pollution. This entails the quantification of leakage rates for selected plastic items identified as highly prone to leakage based on a series of beach surveys. The potential influence of providing such specific knowledge is investigated via the exploration of current LCM practices for plastic products employed by key value-chain actors in the plastics industry. In addition, the life cycle management of three key items identified as problematic (straws, cotton bud sticks and beverage bottle lids) is explored via a case study approach. Beach accumulation surveys are often used to estimate plastic flows into the marine environment. Thus, two series of beach surveys were conducted across five beaches with varying catchment area characteristics in Cape Town, over two periods in 2017 and 2018–2019 respectively. Daily accumulation rates varied across all sites, ranging from 38–2962 items·day⁻¹·100 m⁻¹ during the first sampling period and 305–2082 items·day⁻¹·100 m⁻¹ during the second. Plastic was the major contributor, accounting for 85.6–98.9% of all items by count. Despite the variations in litter accumulation rates and composition, there was significant commonality in the items which were identified as major contributors. The top 12 most prevalent and abundant identifiable plastic items accounted for 43–66% during the first sampling period, and 41–73% during the second. Ten of these items were prevalent during both periods, eight of which were associated with food consumed on-the-go, including beverage bottle lids, polystyrene food containers, single sweet wrappers, snack packets and straws. This indicates that the high litterability of these items was consistent across catchment areas and sampling periods. Furthermore, when ratioed to waste generation, items found to be major contributors had significantly higher leakage rates in comparison to less prevalent items. The increasing concern surrounding plastic pollution has pressured value-chain actors to review their approaches to the life cycle management of plastic products. This has led to the development of strategies focussed on plastic packaging, which were not commonplace across all companies. However, these strategies are not necessarily aimed at mitigating plastic pollution but are more broadly concerned with sustainable product design, emphasising design for recycling and supporting recycling activities at end-of-life as part of their extended producer responsibility.
Thus, the extent to which these strategies address plastic pollution is limited. Furthermore, value-chain actors reported varied approaches to product prioritisation for intervention, which are often not grounded in empirical evidence but instead based on anecdotes and limited logic. This may be attributed to a lack of reliable product-specific information surrounding plastic pollution. Such approaches have the potential to prioritise products which are not major contributors to marine pollution in lieu of those that are. Interventions targeted towards products that were identified as prone to leakage, including straws and cotton bud sticks, were catalysed by consumer pressure and societal expectations at large. Ultimately, this thesis demonstrates the need for product-specific knowledge on leakage to facilitate responsible and effective life cycle management of products involving plastic as a material choice. Furthermore, it has demonstrated the feasibility of providing such information through the use of leakage rates. Leakage rates have the potential to play an important role in product life cycle management, allowing for the identification of products which are highly prone to leakage into the environment. Thus, their integration into LCM practice has the potential to facilitate the development of targeted strategies to address plastic pollution.
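As a rough illustration of the leakage-rate idea described above (ratioing beach-survey item counts to waste generation to flag highly leakage-prone products), the Python sketch below computes a relative leakage indicator; the item names and all numbers are illustrative assumptions, not data from the thesis.

```python
# Hedged sketch: product-specific leakage indicator as survey counts ratioed to
# waste generation. All numbers below are illustrative placeholders.
counts_per_100m_day = {          # mean beach accumulation (items per 100 m per day)
    "beverage_bottle_lids": 120.0,
    "straws": 45.0,
    "cotton_bud_sticks": 18.0,
}
waste_generation = {             # items entering the waste stream (items per capita per day)
    "beverage_bottle_lids": 0.9,
    "straws": 0.4,
    "cotton_bud_sticks": 0.1,
}

# Relative leakage indicator: accumulation normalised by how much of the item is used
leakage_rate = {
    item: counts_per_100m_day[item] / waste_generation[item]
    for item in counts_per_100m_day
}
for item, rate in sorted(leakage_rate.items(), key=lambda kv: -kv[1]):
    print(f"{item}: relative leakage indicator = {rate:.1f}")
```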
3

Integration of maintenance optimization in process design and operation under uncertainty

Vassiliadis, Constantine January 2000 (has links)
No description available.
4

Resilient engineered systems: the development of an inherent system property

Mitchell, Susan McAlpin 17 September 2007 (has links)
Protecting modern engineered systems has become increasingly difficult due to their complexity and the difficulty of predicting potential failures. With the added threat of terrorism, the desire to design systems resilient to potential faults has increased. The concept of a resilient system – one that can withstand unanticipated failures without disastrous consequences – holds promise for designing safer systems. Resilience has been recognized in research settings as a desired end product of specific systems, but resilience as a general, inherent, measurable property of systems had yet to be established. To achieve this goal, system resilience was related to an established concept, the resiliency of a material. System resilience was defined as the amount of energy a system can store before reaching a point of instability. The energy input into each system, as well as the system’s exergy, was used to develop system stress and system strain variables. Process variable changes to four test systems – a steam pipe, a water pipe, a water pump, and a heat exchanger – were applied to obtain series of system stress and system strain data that were then graphed to form characteristic system response curves. Resilience was quantified by performing power-law regression on each curve to determine the variable ranges where the regression line accurately described the data and where the data began to deviate from that power-law trend. The four test systems were then analyzed in depth by combining them into an overall system using the process simulator ASPEN, and the ranges predicted by the overall system data were compared with the ranges predicted for the individual equipment. Finally, future work opportunities were outlined to show potential areas for expansion of the methodology.
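A minimal Python sketch of the power-law regression step described above: fit system stress/strain-style data to y = a·x^b and flag where the data begin to deviate from the fitted trend. The data and the 10% deviation threshold are illustrative assumptions, not values from the dissertation.

```python
# Hedged sketch of a power-law fit with a deviation check.
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

# Hypothetical system "stress" (x) and "strain" (y) observations
x = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y = np.array([0.9, 1.8, 3.4, 6.5, 9.2, 11.0, 11.8, 12.1])  # flattens at high x

params, _ = curve_fit(power_law, x, y, p0=(1.0, 1.0))
a, b = params
relative_error = np.abs(y - power_law(x, a, b)) / y

# Points whose relative error exceeds the threshold mark the onset of deviation
threshold = 0.10
deviating = x[relative_error > threshold]
print(f"fit: y = {a:.2f} * x^{b:.2f}")
print("deviation from the power-law trend begins near x =",
      deviating.min() if deviating.size else "none within range")
```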
5

A Trust Region Filter Algorithm for Surrogate-based Optimization

Eason, John P. 01 April 2018 (has links)
Modern nonlinear programming solvers can efficiently handle very large scale optimization problems when accurate derivative information is available. However, black box or derivative free modeling components are often unavoidable in practice when the modeled phenomena cross length and time scales. This work is motivated by examples in chemical process optimization where most unit operations have well-known equation oriented representations, but some portion of the model (e.g. a complex reactor model) may only be available as an external function call. The concept of a surrogate model is frequently used to solve this type of problem. A surrogate model is an equation oriented approximation of the black box that allows traditional derivative based optimization to be applied directly. However, optimization tends to exploit approximation errors in the surrogate model, leading to inaccurate solutions and repeated rebuilding of the surrogate model. Even if the surrogate model is perfectly accurate at the solution, this only guarantees that the original problem is feasible. Since optimality conditions require gradient information, a higher degree of accuracy is required. In this work, we consider the general problem of hybrid glass box/black box optimization, or gray box optimization, with a focus on guaranteeing that a surrogate-based optimization strategy converges to optimal points of the original detailed model. We first propose an algorithm that combines ideas from SQP filter methods and derivative free trust region methods to solve this class of problems. The black box portion of the model is replaced by a sequence of surrogate models in trust region subproblems. By carefully managing surrogate model construction, the algorithm is guaranteed to converge to true optimal solutions. Then, we discuss how this algorithm can be modified for effective application to practical problems. Performance is demonstrated on a set of benchmark problems as well as a set of case studies relating to chemical process optimization. In particular, application to the oxycombustion carbon capture power generation process leads to significant efficiency improvements. Finally, extensions of surrogate-based optimization to other contexts are explored through a case study with physical properties.
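The Python sketch below illustrates, in simplified form, the trust-region management idea that underlies surrogate-based optimization of a black-box component: build a local surrogate, step within the trust region, and accept or reject the step based on the ratio of actual to predicted improvement. It omits the filter acceptance tests and convergence safeguards that are central to the thesis algorithm, and the black-box function, surrogate form and parameters are assumptions for illustration only.

```python
# Hedged sketch of a basic trust-region loop with a local surrogate of a
# black-box function (one-dimensional, for illustration only).
import numpy as np

def black_box(x):                     # stand-in for an expensive external model
    return (x - 1.7) ** 2 + 0.3 * np.sin(5 * x)

def build_quadratic_surrogate(xc, h=1e-2):
    """Local quadratic surrogate from finite-difference samples around xc."""
    f0, fp, fm = black_box(xc), black_box(xc + h), black_box(xc - h)
    g = (fp - fm) / (2 * h)           # approximate gradient
    H = (fp - 2 * f0 + fm) / h ** 2   # approximate curvature
    return lambda x: f0 + g * (x - xc) + 0.5 * H * (x - xc) ** 2

x, radius = 0.0, 0.5
for it in range(30):
    model = build_quadratic_surrogate(x)
    # Minimise the surrogate inside the trust region by coarse sampling
    candidates = np.linspace(x - radius, x + radius, 201)
    x_trial = candidates[np.argmin([model(c) for c in candidates])]
    actual = black_box(x) - black_box(x_trial)
    predicted = model(x) - model(x_trial)
    rho = actual / predicted if predicted > 0 else 0.0
    if rho > 0.1:                     # accept the step, possibly expand the region
        x = x_trial
        radius = min(2 * radius, 1.0) if rho > 0.75 else radius
    else:                             # reject the step, shrink the region
        radius *= 0.5
    if radius < 1e-6:
        break
print(f"approximate minimiser: x = {x:.4f}, f(x) = {black_box(x):.4f}")
```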
6

Simulation and optimisation of industrial steam reformers : development of models for both primary and secondary steam reformers and implementation of optimisation to improve both the performance of existing equipment and the design of future equipment

Dunn, Austin James January 2004 (has links)
Traditionally the reactor is recognised as the 'heart' of a chemical process system and hence the focus on this part of the system is usually quite detailed. Steam reforming, however, due to the 'building block' nature of its reaction products, is unusual and is generally perceived as a 'utility' to other reaction processes; hence the focus is drawn towards the 'main' reaction processes of the system. Additionally, as a 'mature' process, steam reforming is often treated as sufficiently defined for the requirements of the overall chemical process. For both primary and secondary steam reformers, several models of varying complexity were developed, which allowed assessment of issues raised about previous models and of model improvements, drawing on advancements in modelling that have not only made it possible to increase the scope of simulations but also increased confidence in the simulation results. Despite the complex nature of steam reforming systems, a surprisingly simple model is demonstrated to perform well; however, to improve on existing designs and maximise the capability of current designs, it is shown that more complex models are required. After model development, the natural course is optimisation. This is a powerful tool which must be used carefully, as significant issues remain around its employment. Despite these remaining concerns, some simple optimisation cases showed the potential of the models developed in this work and, although not exhaustive, demonstrated the benefits of optimisation.
7

Transformation of Biomass and Shale Gas Carbon to Fuels and Chemicals

Taufik Ridha (5930192) 03 January 2019 (has links)
Currently, fossil resources dominate the fuel and chemical production landscape. Besides concerns related to ever-increasing greenhouse gas emissions, fossil resources are also limited. In a petroleum-deprived future, sustainably available biomass can serve as a renewable carbon source. Due to its limited availability, however, this biomass resource must be utilized and converted efficiently to minimize carbon losses to undesirable by-products. A modeling and optimization approach that can identify optimal process configurations for chemical and fuel production from biomass, using stoichiometric and thermodynamic knowledge of the underlying biomass reaction system, is proposed in this dissertation. Several case studies were performed with this approach, and the outcomes agreed with reported experimental results. In particular, a case study on fast-hydropyrolysis vapor of cellulose led to the discovery of a new reaction route and provided insights for understanding the formation of experimentally observed molecules. The modeling and optimization approach consists of two main steps. The first step is the generation of the search space and the second step is the identification of all optimal reaction routes.

For the first step, literature review and an automated reaction network generator are employed to identify all possible processes for biomass conversion. Through literature review, yield data on processes that generate biomass-derived molecules are collected. As these biomass-derived molecules often possess multiple functional groups, an automated reaction network generator, which considers a set of biomass-derived molecules and reaction rules, enables generation of all possible reactions. In this work, an automated reaction network generator tool called Rule Input Network Generator is utilized. Using this generated search space, a mathematical optimization problem, which identifies the optimal reaction network, is constructed. For the second step, the optimization problem identifies all reaction routes with the minimum number of reactions for a given set of biomass and target products. This formulation constructs a process superstructure that contains processes that generate biomass-derived molecules and all possible reactions from biomass-derived molecules. In this optimization problem, the main constraint for a reaction is its thermodynamic favorability within a certain temperature range. Using an optimization solver, optimal solutions for this problem are obtained.

Using this developed approach, a case study on upgrading fast-hydropyrolysis vapor of cellulose to higher molecular weight products was investigated. Levoglucosan and glycolaldehyde are major components from fast-hydropyrolysis of cellulose. This approach identified a reaction route that can upgrade these molecules to hydrocarbons with carbon numbers ranging from eight to 12, a route that has not been reported in the literature. The coupling of levoglucosan and glycolaldehyde requires a key intermediate, levoglucosenone, which is identified by this approach. Preliminary experimental results suggest that the proposed reactions are feasible, and this serves as another validation of the approach. Other potential pathways, not only to branched alkanes but also to substituted cycloalkanes and aromatics, were also identified. Molecules with those structures have been observed experimentally, and potential pathways to those molecules can provide insights for experimentalists as to how these products can form and which intermediates may lead to their formation. This approach has not only revealed unknown reaction routes, but also provided insights for experimentalists analyzing complex systems.

Toward reducing carbon losses to char during fast pyrolysis, potential pathways toward char formation during fast pyrolysis were proposed. Investigating proposed char precursors identified using mass spectroscopy, several potential pathways toward the formation of these char precursors were obtained, providing initial insights into the potential driving force for the formation of these char precursors and, ultimately, char itself.

Going beyond fast pyrolysis, primary processes developed in C3Bio along with several existing primary processes were considered in order to identify optimal biorefinery configurations. This approach identified biorefinery configurations with carbon efficiencies of 60-64%. These configurations generate not only fuel-type molecules, but also commodity chemicals that are produced in a traditional refinery. In addition, the approach is capable of providing these products at their current relative production rates in the United States. Other studies on biorefineries reported only 25-59% carbon efficiency and generated mostly fuel-type molecules. Therefore, this approach indicates not only the appropriate reaction sequences, but also the optimal utilization of carbon in biomass-derived molecules. This dissertation provides an initial roadmap toward sustainable production of fuels and chemicals from lignocellulosic biomass.

Considering that the transition to renewable energy is gradual and that shale resources are an abundant fossil resource in the United States, opportunities to valorize shale gas condensate are also explored. The recent shale gas boom has transformed the United States energy landscape. Most of the major shale basins are located in remote locations and historically non-gas-producing regions. Therefore, many major shale basin regions lack the infrastructure to distribute the extracted gas to the rest of the US, and particularly to the Gulf Coast region. In this dissertation, shale gas catalytic upgrading processes were synthesized, designed, and simulated in Aspen Plus. Using Aspen Economic Analyzer, a preliminary techno-economic analysis was performed and the economic potential was evaluated at varying scales to assess the impact on the United States chemical industry landscape.
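One way to picture the second step (finding routes with the minimum number of reactions) is as a small integer program over a candidate reaction network. The Python sketch below uses PuLP on a toy network loosely inspired by the levoglucosan/glycolaldehyde coupling discussed above; the species, reactions and constraints are illustrative assumptions and do not reproduce the dissertation's Rule Input Network Generator search space or its thermodynamic favorability constraints.

```python
# Hedged sketch of route selection as an integer program: pick the minimum
# number of reactions connecting a feed to a target, requiring every selected
# reaction to have all of its reactants available.
import pulp

# (name, reactants, product) -- hypothetical network for illustration only
reactions = [
    ("r1", ["cellulose_vapor"], "levoglucosan"),
    ("r2", ["cellulose_vapor"], "glycolaldehyde"),
    ("r3", ["levoglucosan"], "levoglucosenone"),
    ("r4", ["levoglucosenone", "glycolaldehyde"], "C8_C12_hydrocarbons"),
    ("r5", ["glycolaldehyde"], "ethylene_glycol"),
    ("r6", ["ethylene_glycol", "levoglucosenone"], "C8_C12_hydrocarbons"),
]
feed, target = "cellulose_vapor", "C8_C12_hydrocarbons"
species = {s for _, rs, p in reactions for s in rs + [p]}

prob = pulp.LpProblem("min_reactions_route", pulp.LpMinimize)
use = {name: pulp.LpVariable(f"use_{name}", cat="Binary") for name, _, _ in reactions}
made = {s: pulp.LpVariable(f"made_{s}", cat="Binary") for s in species}

prob += pulp.lpSum(use.values())                    # minimise the number of reactions
prob += made[feed] == 1                             # the feed is always available
prob += made[target] == 1                           # the target must be produced
for name, reactants, product in reactions:
    for r in reactants:
        prob += use[name] <= made[r]                # a reaction needs all its reactants
for s in species - {feed}:
    prob += made[s] <= pulp.lpSum(
        use[name] for name, _, p in reactions if p == s)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected route:", [name for name in use if use[name].value() == 1])
```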
8

Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty

Calfa, Bruno Abreu 01 April 2015 (has links)
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise from real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, the hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand side uncertainty. The distributions are estimated with kernel smoothing and are considered to be in a confidence set, which is also considered to contain the true, unknown distributions. The chapter proposes the calculation of the size of the confidence set based on the standard errors estimated from the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates from which the deviation from plan is defined and considered a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problems. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
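As an illustration of the Chapter 5 idea of modeling production variability with quantile regression, the Python sketch below fits the deviation of actual from planned production at several quantiles using statsmodels; the synthetic data and the assumed deviation pattern are placeholders, not results from the dissertation.

```python
# Hedged sketch: quantile regression of production deviation on planned rate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
planned = rng.uniform(50, 150, size=200)                 # planned production rate
# Assumed pattern: actual falls further short of plan at higher planned rates
actual = planned - rng.gamma(shape=2.0, scale=0.05 * planned)
deviation = actual - planned                             # random variable of interest

X = sm.add_constant(planned)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(deviation, X).fit(q=q)
    intercept, slope = res.params
    print(f"q={q}: deviation ≈ {intercept:.1f} + {slope:.3f} * planned")
```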
9

Estimation of particle size distributions in mineral process systems using acoustic techniques

Swanepoel, Francois 04 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2000. / ENGLISH ABSTRACT: A desire to increase the efficiency of the comminution process in mineral process systems has led to the need to determine the size distribution of ore particles at various stages in the system. The objective of this research is to investigate the feasibility of using an acoustic sensor for measuring particle size distribution. The acoustic signal generated when the particles impact on a cantilever bar is analysed using digital signal processing techniques. As rocks fall onto a metal bar, the bar vibrates. The vibrations contain information that is extracted to determine the size of the particles that impacted on the bar. The bar is modelled as a linear system which is excited by impulses (the impacts of particles). The response of the bar is deconvolved from the acoustic signal to obtain an impulse whose amplitude is proportional to the energy of the impact. In order to improve size estimates, deconvolution is performed using a statistical model of the impulse sequence (Bernoulli-Gaussian), and the sequence is estimated using MAP estimation. Size estimates are a function not only of the mass of the particles, but also of the exact position of impact on the bar. Since there is always variation in the position of impact, size estimates are erroneous. It was found that the position of impact can be determined so as to reduce variances dramatically. Due to physical sampling in space, the sensor has a bias towards larger particles. We show how this can be represented mathematically and removed. This project is mainly concerned with rocks in the +8-25mm (+0,7-22 gram) size range. / AFRIKAANSE OPSOMMING: Comminution of ore in the minerals industry requires large amounts of energy, and a need has been identified to make this process more efficient. Since the efficiency of a mill is a function of the ore sizes being milled, particle size information can be used to improve efficiency. The aim of this thesis is to investigate the feasibility of an acoustic sensor for the purpose of particle size estimation. Ore particles falling from a conveyor belt onto a cantilever bar cause the bar to vibrate. By measuring and processing these vibrations, information about particle size can be obtained. The system is modelled as a linear system with impulses as input. The observed signal is the convolution of the input impulses with the impulse response of the system. Using a statistical model and MAP estimation, the effect of the system is deconvolved from the observed signal to obtain an approximation of the input impulse signal. The amplitudes of the impulses are used as an indication of particle mass. Particle sizes as estimated by the system are a function of the position at which a particle strikes the bar. By using pattern recognition techniques, the position of impact is determined so that size estimates can be adjusted and the variance of the size distributions reduced. Because particles are sampled, with only a small percentage of the full particle stream being examined, a bias towards larger particles arises: the probability that larger particles strike the bar is greater than for small particles. A mathematical model for this phenomenon is proposed and it is shown how the bias can be neutralised. This project deals with ore sizes of +8-25 mm (+0,7-22 gram).
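The deconvolution step described in the abstract can be pictured with the simplified Python sketch below, which recovers impulse-like excitations from a simulated bar response using Wiener deconvolution; this is a stand-in for the Bernoulli-Gaussian MAP estimator developed in the thesis, and the assumed impulse response (a decaying 800 Hz oscillation), sampling rate and noise level are illustrative assumptions.

```python
# Hedged sketch: recover impulse-like impacts from a simulated bar response
# using Wiener deconvolution (a simplified stand-in for the MAP estimator).
import numpy as np

fs, n = 10_000, 4096                       # sample rate (Hz), record length
t = np.arange(n) / fs

# Assumed impulse response of the cantilever bar: decaying 800 Hz oscillation
h = np.exp(-60 * t) * np.sin(2 * np.pi * 800 * t)

# Synthetic excitation: two impacts with amplitudes proportional to impact energy
x = np.zeros(n)
x[500], x[2500] = 1.0, 2.5

rng = np.random.default_rng(1)
y = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)   # measured signal

# Wiener deconvolution in the frequency domain
H = np.fft.rfft(h, n)
Y = np.fft.rfft(y, n)
noise_to_signal = 1e-3                      # regularisation level, assumed
G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
x_hat = np.fft.irfft(G * Y, n)

peaks = np.argsort(x_hat)[-2:]              # two largest recovered impulses
print("recovered impact samples:", sorted(peaks))
print("recovered amplitudes:", np.round(np.sort(x_hat[peaks]), 2))
```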
