81

Optimization of Configuration Management Processes

Kristensson, Johan January 2016 (has links)
Configuration management is a process for establishing and maintaining consistency of a product's performance, as well as its functional and physical attributes, with regard to requirements, design, and operational information throughout its lifecycle. How configuration management is implemented in a project has a major impact on the project's chance of success. Configuration management is, however, notoriously difficult to implement well, i.e. in a way that increases performance and decreases the risk of projects. What works well in one field may be difficult to implement, or may not work at all, in another. The aim of this thesis is to present a process for optimizing configuration management processes, using a telecom company as a case study. The telecom company is undergoing a major overhaul of its customer relationship management system and has serious problems both with the quality of the software produced and with meeting deadlines; it therefore wants to optimize its existing CM processes to help address these problems. Data collected in preparation for the optimization revealed that configuration management tools were not used properly, tasks that could have been automated were done manually, and existing processes were not built on sound configuration management principles. The recommended optimization strategy would have been to fully implement a version-control tool and to change the processes to take better advantage of it. This was, however, deemed too big a change, so instead a series of smaller changes with less impact were implemented, with the aim of improving quality control and minimizing the number of bugs that reached production. The majority of the changes were intended to replicate the most basic functions of a version-control tool and to automate manual tasks that were error-prone.
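The "smaller changes" replicating basic version-control functions can be pictured with a minimal snapshot-and-diff script. This is a hypothetical Python sketch, not code from the thesis; the file layout and function names are invented.

```python
import hashlib
import json
from pathlib import Path

def snapshot(src_dir: str, manifest_path: str) -> dict:
    """Record a SHA-256 hash per file, mimicking the most basic
    function of a version-control tool: detecting what changed."""
    manifest = {}
    for path in sorted(Path(src_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def changed_files(old: dict, new: dict) -> list:
    """Files added or modified since the previous snapshot."""
    return [p for p, h in new.items() if old.get(p) != h]
```

A manual release checklist step ("which files did we touch?") becomes a call to `changed_files` over two stored manifests, removing one error-prone manual task.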
82

Förbättrad hantering av biobankssamtycke. : En kvalitativ studie om tidseffektiviserad handläggning av nej-talonger i en biobankverksamhet i Region Stockholm. / Improved handling of biobanking consent. : A qualitative study on time-efficient processing of biobanking consent at one biobank organization in the region of Stockholm.

Armus, Marija January 2023 (has links)
According to the current Biobank Act in Sweden, the processing of patients' biobank consents must take place immediately and without unnecessary delay. From a patient-safety perspective, it is important to keep biobank consents up to date in the laboratory information system, since correctly updated consents are critical to the delivery of safe care. Previously, the biobank organization where the study was conducted needed an average of 77 working days to process an incoming opt-out form (nej-talong), which was far too long. The aim of the improvement work was to make the biobank-consent process more time-efficient at one biobank organization in Region Stockholm. Nolan's improvement model, Service Blueprint, 5 P's analysis, and PDSA cycles were used as methods. With the help of five improvement interventions, the number of processing days for opt-out forms was reduced from 77 to 42 working days, a reduction of 35 working days. The improvement work resulted in a more time-efficient biobank-consent process and a new national recommendation regarding the number of processing days for opt-out forms. The study of the improvement work aimed to increase understanding of the staff's experiences of carrying out improvement work on the biobank-consent process. It was conducted as a qualitative case study with an inductive approach; qualitative data were collected through eight interviews. The study identified two factors that, according to the staff involved, led to increased team effectiveness and a more time-efficient process: systematization of the biobank-consent process and motivational factors. Changed working methods, increased personnel resources, and developed communication channels contributed to creating a systematic process, which facilitated the implementation of the improvement work and made it more time-efficient. Motivational factors that influenced team effectiveness during the improvement work were group support, agile working methods, and visual evaluation of the improvement work.
83

Void Modeling in Resin Infusion

Brandley, Mark Wesley 01 June 2015 (has links) (PDF)
Resin infusion of composite parts has continually sought to achieve laminate quality equal to, or exceeding, that produced with prepreg in an autoclave. For this to occur, developers must understand the key process variables that go into producing a laminate with minimal void content. The purpose of this research is to continue efforts in understanding 1) the effect of process conditions on the resultant void content, with a focus on resin infusion flow rate, 2) the application of statistical metrics to the formation, location, and size of the voids formed, and 3) the correlation of these metrics with the local mechanical properties of the composite laminate. The dispersion and formation of micro-voids and macro-voids varied greatly with the flow rate at which infusion occurred, especially in the non-crimp carbon fiber samples. Higher flow rates led to lower volumes of micro-voids in the beginning section of the carbon fiber laminates, with macro-voids being introduced approximately halfway through infusion. This was attributed to the decreasing pressure gradient as the flow front moved away from the inlet. This variation in void content by location on the laminate was more evident in the carbon fiber samples than in the fiberglass samples. Micro-voids follow void formation modeling, especially when coupled with a pressure threshold model. Macro-void formation was also demonstrated to correlate strongly with void formation models when united with void mobility theories and pressure thresholds. There is a rapid decrease in mechanical properties over the first 1-2% of voids, signaling that strength is mostly sensitive to the first 0-2% void content. A slight decrease in SBS was noticed in fiberglass laminates A-F as v0 increased, but not as drastically as in the NCF laminates G and H. The lower clarity in the exponential trend could be due to the lack of samples with v0 greater than 0% but less than 1%. Strength is not well correlated to void content above 2% and could possibly be related to void morphology.
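The pressure-threshold view of micro-void formation mentioned above can be sketched numerically: under 1-D Darcy flow with constant permeability, pressure drops roughly linearly from inlet to flow front, and voids are predicted wherever the local pressure falls below a threshold. The linear profile and the numbers below are illustrative assumptions, not values from the study.

```python
def pressure_profile(p_inlet, p_front, n_points):
    """Pressure along the part for 1-D Darcy flow with constant
    permeability: a linear drop from the inlet to the flow front."""
    return [p_inlet + (p_front - p_inlet) * i / (n_points - 1)
            for i in range(n_points)]

def void_risk_region(pressures, p_threshold):
    """Indices where the local pressure is below the void-formation
    threshold, i.e. where micro-voids are predicted to nucleate."""
    return [i for i, p in enumerate(pressures) if p < p_threshold]
```

Raising the flow rate steepens the gradient near the inlet and shifts the low-pressure (void-prone) zone toward the flow front, consistent with the location dependence the abstract reports.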
84

Process Intensification of Chemical Systems Towards a Sustainable Future

Zewei Chen (13161915) 27 July 2022 (has links)
Cutting greenhouse gas emissions to as close to zero as possible, or "net-zero", may be the biggest sustainability goal to be achieved in the next 30 years. While chemical engineering evolved against the backdrop of an abundant supply of fossil resources for chemical production and energy, renewable energy resources such as solar and wind will find more usage in the future. This thesis develops new concepts, methods, and algorithms to identify and synthesize process schemes that address multiple aspects of sustainable chemical and energy systems. Shale gas can serve as both an energy resource and a chemical feedstock for the transition period towards a sustainable economy, and has the potential to be a carbon source for the long term. The past two decades have seen increasing natural gas flaring and venting due to the lack of transformation or transportation infrastructure in emerging shale gas producing regions. To reduce carbon emissions and wastage of shale resources, an innovative process hierarchy is identified for the valorization of natural gas liquids from shale gas at medium to small scale near the wellhead. This paradigm shift fundamentally changes the sequencing of various separation and reaction steps and results in dramatically simplified and intensified process flowsheets. The resulting processes could achieve over 20% lower capital cost with a higher recovery of products. Historically, heat energy has been supplied to chemical plants by burning fossil resources; in the future, with the emphasis on greenhouse gas reduction, renewable energy resources will find more usage. Renewable electricity from photovoltaics and wind has now become competitive with electricity from fossil resources. A major challenge for chemical engineering processes is therefore how to use renewable electricity efficiently within a chemical plant and eliminate any carbon dioxide release. We introduce several decarbonization flowsheets for a process that first converts natural gas liquids (NGLs) to mainly ethylene in an energy-intensive dehydrogenation reactor and subsequently converts the ethylene into value-added, easy-to-transport liquid fuels.

Molecular separations are needed across many industries, including the oil and gas, food, pharmaceutical, and chemical industries. In a chemical plant, 40-60% of energy and capital cost is tied to separation processes. For widespread industrial use of membrane-based processes to recover high-purity products from gaseous and liquid mixtures, models that allow membrane cascades to be operated at their optimal modes are desirable; such models also enable proper comparison of membrane performance against competing separation technologies. However, a model of this kind for multicomponent fluid separation has been missing from the literature. We have developed an MINLP global optimization algorithm that guarantees identification of the minimum power consumption of multicomponent membrane cascades. The algorithm is implemented in GAMS and demonstrated to solve cascades of up to 4 components and 5 stages via the BARON solver, a significant advance over the state of the art. The model is currently being extended to optimize total cost, including capital. Such a model holds promise for the development and implementation of energy-efficient separation plants with the least carbon footprint. This thesis also addresses other important separation topics, including dividing wall columns and water desalination.
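As a toy illustration of the membrane modeling discussed above (not the thesis's MINLP formulation), a single perfectly mixed binary stage at high pressure ratio can be solved with a scalar bisection on the component balance; `alpha` (membrane selectivity) and `theta` (stage cut) are hypothetical inputs.

```python
def retentate_composition(x_feed, alpha, theta, tol=1e-10):
    """Binary, perfectly mixed membrane stage at high pressure ratio:
    permeate composition follows y/(1-y) = alpha * x/(1-x) and the
    component balance is x_feed = theta*y + (1-theta)*x, where theta
    is the stage cut. Solve for the retentate mole fraction x by
    bisection (the residual is monotone in x for alpha >= 1)."""
    def residual(x):
        y = alpha * x / (1.0 + (alpha - 1.0) * x)
        return theta * y + (1.0 - theta) * x - x_feed
    lo, hi = 0.0, x_feed  # retentate is depleted in the faster-permeating gas
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A cascade model chains such stages and adds compression work between them; the optimization problem the thesis solves chooses the interconnections and operating points, which is what makes it an MINLP rather than a single root-find.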
85

Processing of toughened cyanate ester matrix composites

Rau, Anand V. 06 June 2008 (has links)
This investigation explored the feasibility of recently developed toughened cyanate ester networks as candidate materials for high performance composite matrix applications. The resin investigated was a Bisphenol-A cyanate ester toughened with hydroxy-functionalized, phenolphthalein-based amorphous poly(arylene ether sulfone). The thermoplastic-modified toughened networks exhibited improved fracture toughness over the base cyanate ester networks without significant reductions in mechanical properties or glass transition temperature. Void-free, unidirectional carbon fiber prepreg was successfully manufactured with the toughened cyanate resin using a solventless hot-melt technique. The resin mass fraction of the prepregs was between 31 and 35%. The carbon fiber, toughened cyanate ester prepreg was fabricated into composite panels for mechanical and physical testing. The cure cycle used to manufacture the composite laminates was developed with the aid of a process simulation model developed by Loos and Springer. In order to accurately simulate the resin curing and flow processes, the cure reaction kinetics and melt viscosity were characterized as functions of temperature and degree of cure and input into the simulation model. The model-generated cure cycle was used in the manufacture of 8-ply unidirectional and 16-ply quasi-isotropic composite laminates. The manufactured laminates were well consolidated to the specified fiber volume fraction of between 59 and 60%. Photomicrographs showed that the laminates are void-free, the fiber and resin distribution is uniform, and fiber wet-out is very good. Mechanical tests were performed to measure the impact damage resistance and shear properties of the toughened cyanate ester resin composites. The results show improvements in impact damage resistance compared with commonly used hot-melt epoxy resin composites. The influence of processing on performance was observed from the results of shear tests.
Carbon fabric composite panels were manufactured by liquid molding processes (resin transfer molding and resin film infusion) with a series of four toughened cyanate ester resins generated by varying the concentration and the molecular weight of the toughener. The panels were subjected to physical, damage tolerance, and fracture toughness tests. The results of physical testing indicate consistently uniform quality, and the void content was found to be less than 2%. The toughened cyanate ester composites exhibited significantly improved impact damage resistance and tolerance compared with hot-melt epoxy systems. Marked increases in the mode II fracture toughness were observed with an increase in the concentration and the molecular weight of the toughener. / Ph. D.
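The cure-kinetics characterization described above feeds the process simulation as a rate law in temperature and degree of cure. As a loose illustration (not the thesis's fitted model; all kinetic parameters below are invented for the sketch), an autocatalytic cure law can be integrated at a constant hold temperature:

```python
import math

def cure_profile(temp_K, hours, A=1e5, E=70e3, m=0.5, n=1.5, dt=1.0):
    """Integrate an autocatalytic cure model
        d(alpha)/dt = A * exp(-E/(R*T)) * alpha**m * (1-alpha)**n
    with forward Euler at a constant temperature, returning the final
    degree of cure alpha. Parameters here are illustrative only."""
    R = 8.314  # J/(mol K)
    k = A * math.exp(-E / (R * temp_K))
    alpha = 1e-3  # small seed; a pure autocatalytic model stalls at alpha = 0
    for _ in range(int(hours * 3600 / dt)):
        alpha += dt * k * alpha**m * (1.0 - alpha)**n
        alpha = min(alpha, 1.0)
    return alpha
```

In a full cure-cycle simulation the temperature would follow the prescribed ramp-and-hold schedule, and the viscosity model (also a function of T and alpha) would determine the resin-flow window.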
86

Otimização de um processo industrial de produção de isopreno via redes neurais. / Optimization of an industrial process for isoprene production using neural networks.

Alves, Rita Maria de Brito 02 July 2003 (has links)
This work describes the application of three-layer feed-forward neural networks (NNs) in different areas of chemical engineering. The main objective of the study is to model, simulate, and optimize a real industrial plant, using NNs in place of phenomenological models. The industrial process studied is the isoprene production unit of BRASKEM (formerly COPENE). The chemical process consists essentially of a dimerization reactor and a train of distillation columns. Since NNs are able to extract information from plant data in an efficient manner, the neural network model was built directly from historical operating data, collected every 15 minutes over a period of one year. These data were first carefully analyzed in order to identify and eliminate gross errors and non-steady-state operating data. The modeling with NNs was carried out by parts in order to obtain information on intermediate streams; the global model was then built by interconnecting the individual models and used to simulate and optimize the process. The optimization procedure performs a detailed grid search of the region of interest through a full mapping of the objective function on the space of decision variables. The second stage of this work deals with the prediction of azeotropes, also using the neural network approach, in order to obtain a better understanding of the behavior of the isoprene extraction section. Since all the cases studied are nonlinear, complex, multivariable systems, the great advantage of NN models, beyond fitting data, is their capability of learning a system efficiently without knowledge of the physical and chemical laws that govern it. Comparisons between the models' predictions and the experimental data were performed, and very good results were achieved from an industrial point of view. This methodology provides more comprehensive information for a process engineer's analysis than the corresponding conventional procedures. The work shows that the NN methodology is promising for several industrial applications, such as data analysis, process modeling, simulation, and optimization, as well as prediction of thermodynamic properties. However, success in obtaining a reliable and robust NN depends strongly on the choice of the variables involved, as well as on the quality of the available data set and the domain used for training.
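The grid-based optimization described above, a full mapping of the objective over the decision-variable space, can be sketched as follows. The `surrogate` below merely stands in for the trained neural-network model and is purely hypothetical.

```python
from itertools import product

def grid_search(objective, bounds, points_per_dim=21):
    """Evaluate the objective on a full rectangular grid of the
    decision variables and return the best point found."""
    axes = []
    for lo, hi in bounds:
        step = (hi - lo) / (points_per_dim - 1)
        axes.append([lo + i * step for i in range(points_per_dim)])
    best_x, best_f = None, float("inf")
    for x in product(*axes):  # full Cartesian grid
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Stand-in for the trained neural-network surrogate (hypothetical):
surrogate = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2
```

A full mapping is tractable here precisely because the surrogate is cheap to evaluate compared with a rigorous plant model, which is the practical payoff of replacing phenomenological equations with a trained NN.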
87

Advanced tabulation techniques for faster dynamic simulation, state estimation and flowsheet optimization

Abrol, Sidharth 14 October 2009 (has links)
Large-scale processes modeled using differential algebraic equations based on mass and energy balance calculations can require excessive computation time to simulate. Depending on the complexity of the model, these simulations may require many iterations to converge, and in some cases they may not converge at all. Application of a storage and retrieval technique named in situ adaptive tabulation (ISAT) is proposed for faster convergence of process simulation models. A comparison with neural networks is performed, and better performance using ISAT for extrapolation is shown. In particular, the requirement of real-time dynamic simulation for operator training simulators (OTS) is discussed. Integration of ISAT into a process simulator (CHEMCAD®) using input-output data only is shown. A regression technique based on partial least squares (PLS) is suggested to approximate the sensitivity without accessing the first-principles model. Different record-distribution strategies for building an ISAT database are proposed, and better performance using the suggested techniques is shown in different case studies. A modified ISAT algorithm (mISAT) is described to improve the retrieval rate, and its performance is compared with the original approach in a case study. State estimation is a key requirement of many process control and monitoring strategies. Different nonlinear state estimation techniques studied in the past are discussed along with their relative advantages and disadvantages. A robust state estimation technique such as moving horizon estimation (MHE) involves a trade-off between the accuracy of state estimates and computational cost. Implementation of MHE-based ISAT is shown to provide faster state estimation with the same accuracy as MHE. Flowsheet optimization aims to optimize an objective or cost function by changing various independent process variables, subject to design and model constraints.
Depending on the nonlinearity of the process units, an optimization routine can make many calls for flowsheet (simulation) convergence, making the computation time prohibitive. Storage and retrieval of the simulation trajectories can speed up process optimization, which is shown using a CHEMCAD® flowsheet. Online integration of an ISAT database to solve the simulation problem together with an outer loop consisting of the optimization routine is shown using the sequential-modular approach.
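The core idea of ISAT-style storage and retrieval — reuse a stored evaluation when a query lands close enough to a tabulated point, otherwise evaluate the expensive function and grow the table — can be caricatured in a few lines. This toy deliberately omits the local linearizations and ellipsoidal regions of accuracy that real ISAT maintains.

```python
import math

class Tabulation:
    """Minimal ISAT-flavoured cache: store (x, f(x)) records and reuse
    a stored output when a query falls within a fixed tolerance of a
    stored input; otherwise evaluate and store. A toy sketch only."""

    def __init__(self, func, radius=0.05):
        self.func = func
        self.radius = radius
        self.records = []    # list of (x, f(x)) pairs
        self.retrievals = 0  # cheap cache hits
        self.additions = 0   # expensive evaluations

    def query(self, x):
        for xr, fr in self.records:
            if math.dist(x, xr) <= self.radius:
                self.retrievals += 1
                return fr
        f = self.func(x)
        self.records.append((x, f))
        self.additions += 1
        return f
```

In a dynamic simulation or an optimization loop, repeated queries cluster in a small region of input space, so the retrieval fraction climbs quickly and the expensive model is called far less often — the speed-up the abstract exploits.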
88

Dinâmica, otimização e controle de processos de fermentação em estado sólido : desenvolvimento de metodologias em escala laboratorial

Fonseca, Rafael Frederico 04 May 2016 (has links)
Solid-state fermentation is characterized by the growth of microorganisms in the absence of free water. On the one hand, this is advantageous because it simulates their natural environment and enables the use of agro-industrial residues in natura. On the other hand, it limits heat transfer between the elements of the process, restricting control over the temperature of the medium. Microbial growth and product formation dynamics are directly affected by environmental conditions, and variations can be harmful to process productivity; consequently, the temperature increase caused by metabolic heat needs to be avoided. Studies on how microbial dynamics respond to these variations are scarce. Moreover, no control laws with guaranteed stability, designed for reference tracking and for minimizing the effects of disturbances, were found for solid-state fermentation. Two fronts therefore need to be addressed for the viability of solid-state fermentation: the development of a mathematical model able to estimate the effects of environmental changes on the process, and a temperature control system able to handle the heat generated by microbial metabolism.
The model was used in a computational algorithm to determine whether there was a temperature profile more favorable to product formation. Two control laws were studied: a proportional-integral (PI) controller, because it is the most widespread in industry, and a model-based predictive controller, because of its versatility in multivariable control. Both control laws were simulated and then implemented in an eleven-liter agitated-drum bioreactor. Several methods for tuning the PI controller parameters were evaluated for performance and relative stability, and the one that proved stable was implemented in the bioreactor. Due to the uncertainties of the fermentation process, a self-adjustment mechanism for the developed mathematical model was added to the predictive controller, so that errors caused by non-estimated states of the real process were compensated; with this approach the controller achieved adequate performance. The results showed that the microorganism, Aspergillus niger 3T5B8, produced the largest quantity of the metabolites of interest at a constant temperature of 32 °C. In addition, both controllers presented results appropriate to the requirements of the fermentation process, with mean deviations from the reference temperature below 0.6 °C and a maximum error of 2.8 °C.
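The PI temperature loop evaluated above can be sketched on a lumped first-order thermal model with a constant metabolic-heat disturbance. The gains, time constant, and disturbance size below are illustrative assumptions, not the values identified in the thesis.

```python
def simulate_pi(setpoint=32.0, kp=5.0, ki=0.02, steps=6000, dt=1.0):
    """Discrete PI control of bioreactor temperature on a lumped
    first-order thermal model with a constant metabolic-heat term.
    Returns the final temperature after `steps` seconds."""
    T_amb, tau, gain, q_meta = 25.0, 600.0, 0.2, 1.5  # illustrative constants
    T, integral = T_amb, 0.0
    for _ in range(steps):
        error = setpoint - T
        integral += error * dt
        u = kp * error + ki * integral      # heater/cooler command
        u = max(-50.0, min(50.0, u))        # actuator saturation
        # plant: relaxes toward ambient + actuation + metabolic heat
        T += dt * (T_amb + gain * u + q_meta - T) / tau
    return T
```

The integral term is what absorbs the constant metabolic-heat disturbance at steady state; a pure proportional controller would settle with a permanent offset below the 32 °C reference.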
89

Otimização de um processo industrial de produção de isopreno via redes neurais. / Optimization of an industrial process for isoprene production using neural networks.

Rita Maria de Brito Alves 02 July 2003 (has links)
This work describes the application of a three-layer feed-forward neural network (NN) in different areas of chemical engineering. The main objective of the study is to model, simulate and optimize a real industrial plant, using NNs in place of phenomenological models. The industrial process studied is the isoprene production unit of BRASKEM (formerly COPENE), which consists essentially of a dimerization reactor followed by a train of distillation columns. Since NNs are able to extract information from plant data efficiently, the neural network model was built directly from historical plant data collected every 15 minutes over a period of one year. These data were first analyzed carefully in order to identify and eliminate gross errors and non-steady-state operating data. The modeling was carried out in parts in order to obtain information on intermediate streams; the individual models were then interconnected into a global model, which was used to simulate and optimize the process. The optimization procedure performs a detailed grid search of the region of interest by fully mapping the objective function over the space of decision variables. The second stage of this work deals with azeotrope prediction, also using the neural network approach, with the aim of better understanding the behavior of the isoprene extraction section. Since all the systems studied are non-linear, complex and multivariable, the NN approach is attractive because of its ability to learn a system without knowledge of the physical and chemical laws that govern it. Comparisons between the models' predictions and experimental data were performed, and good results were achieved from an industrial point of view. The neural network approach also provides more comprehensive information for an engineer's analysis than the corresponding conventional procedures. This work shows that the NN methodology is promising for several industrial applications, such as data analysis, process modeling, simulation and optimization, and the prediction of thermodynamic properties. However, success in obtaining a reliable and robust NN depends strongly on the choice of the variables involved, on the quality of the available data set, and on the domain used for training.
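The surrogate-plus-grid-search procedure described in the abstract (fit a feed-forward network to plant data, then map the objective function over a grid of decision variables) can be sketched in a few lines. The sketch below is purely illustrative: the synthetic data, the network size, and the two decision variables are invented stand-ins, not the actual model or process variables used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for plant data: two scaled decision variables
# (imagine, say, reactor temperature and feed ratio, both in [0, 1])
# and a yield-like response. The real unit's data came from historian
# records sampled every 15 minutes over a year.
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = (np.sin(np.pi * X[:, 0]) * X[:, 1]).reshape(-1, 1)  # synthetic "yield"

# Three-layer feed-forward network: input -> tanh hidden layer -> linear output.
n_hidden = 8
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

# Full-batch gradient descent on mean squared error.
lr = 0.1
for epoch in range(3000):
    H, out = forward(X)
    err = out - y                          # dL/d(out) for 0.5 * MSE
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Grid search: map the trained surrogate over the decision space and
# pick the best operating point, as in the thesis's optimization step.
g = np.linspace(0.0, 1.0, 51)
grid = np.array([(a, b) for a in g for b in g])
_, pred = forward(grid)
best = grid[np.argmax(pred)]
print("best operating point found on the grid:", best)
```

Because the surrogate is cheap to evaluate, an exhaustive grid over the decision variables is feasible here, whereas the same mapping with a rigorous phenomenological model would be far more expensive.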
90

Aplikace procesní analýzy při řízení kvality a testování software / Application of the process analysis in quality assurance and software testing

Popelka, Vladimír January 2011 (has links)
This thesis deals with quality assurance and software testing. Its theoretical part specifies the general concept of quality, describes the standards used for evaluating software product quality, and evaluates the software development process itself. The thesis introduces the theoretical framework of software quality assurance, in particular a detailed analysis of the software testing discipline. As an added value, the theoretical part also characterizes the process approach and selected methods used for process improvement. The practical part demonstrates the process approach to software quality management as applied in a selected IT company. Its main aim is to create a practical project for optimizing the quality assurance and software testing processes. The core of the work is a process analysis of the current state of the software testing methodology: models of the key processes are created and described according to a defined pattern, and the description of the current state of the quality assurance processes is supplemented by an evaluation of their maturity. The optimization project builds on this process analysis and on the maturity evaluation of the process models; the essence of the optimization is incorporating change requests and improvement intentions for the individual processes into the resulting draft of the methodology. To measure selected quality assurance and testing processes, efficiency indicators are configured and applied to particular processes. Both the analysis of the current state and the elaboration of the optimization project follow the principles of the DMAIC model of the Six Sigma method.
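As an illustration of the kind of efficiency indicator the abstract mentions for measuring a testing process, two common choices are defect removal efficiency (the share of defects caught before release) and defect density per thousand lines of code. The indicator definitions, release names, and figures below are illustrative assumptions, not the studied company's actual metrics.

```python
# Hypothetical efficiency indicators for a software testing process,
# in the spirit of the Measure step of DMAIC. All names and numbers
# here are invented for illustration.

def defect_removal_efficiency(found_in_test, found_in_production):
    """Share of all known defects caught before release."""
    total = found_in_test + found_in_production
    return found_in_test / total if total else 1.0

def defects_per_kloc(defects, kloc):
    """Defect density normalized by release size in KLOC."""
    return defects / kloc

releases = [
    {"name": "R1", "test": 42, "prod": 8, "kloc": 12.5},
    {"name": "R2", "test": 55, "prod": 5, "kloc": 14.0},
]
for r in releases:
    dre = defect_removal_efficiency(r["test"], r["prod"])
    density = defects_per_kloc(r["test"] + r["prod"], r["kloc"])
    print(f'{r["name"]}: DRE={dre:.2f}, defects/KLOC={density:.1f}')
```

Tracking such indicators across releases is one way to verify, in the Control phase, whether the optimized processes actually reduce the number of defects escaping to production.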
