  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Real time optimization in chemical process: evaluation of strategies, improvements and industrial application.

José Eduardo Alves Graciano 03 December 2015 (has links)
Increasing economic competition drives industry to implement tools that improve the efficiency of its processes. Process automation is one such tool, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects, as well as process and equipment constraints, to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and the low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no thorough comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, the parameter estimation method is appropriate to handle the measurement noise characteristics, and a method is available to improve the quality of the sample information. At each iteration, the RTO methodology updates key parameters of the model, and identifiability issues, caused mainly by the lack of measurements and by measurement noise, can be observed at this step, resulting in poor prediction ability.
Therefore, four different parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness, and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement, and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period during which the process operates under suboptimal conditions. An alternative to handle this problem is proposed in this thesis by integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control (MPC) strategy. The new approach demonstrates that it is possible to reduce the problem of the low frequency of set-point updates, improving the economic performance. Finally, the practical aspects of RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located at the Paulínia refinery (REPLAN, Petrobras). The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
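The classical two-step RTO scheme described in the abstract can be sketched as a minimal loop: estimate a key model parameter from plant measurements, then re-optimize the economic objective subject to the updated model. The quadratic plant and model, the single updated parameter, and the price numbers below are invented for illustration; they are not the thesis's actual VRD model.

```python
# Minimal sketch of one classical two-step RTO loop (parameter
# estimation followed by economic optimization). The quadratic
# plant/model and the economics are illustrative assumptions.

A_PLANT, B_PLANT = 4.0, 0.5      # "true" plant: y = 4u - 0.5u^2
B_MODEL = 0.4                    # model keeps a wrong curvature -> structural mismatch
PRICE, COST = 1.0, 1.0           # economics of the objective

def plant(u):
    """Steady-state plant response (noise-free measurement here)."""
    return A_PLANT * u - B_PLANT * u ** 2

def estimate_a(u, y):
    """Step 1: update the key model parameter a from one measurement."""
    return (y + B_MODEL * u ** 2) / u

def optimize_setpoint(a):
    """Step 2: maximize profit p*(a*u - b*u^2) - c*u analytically."""
    return (PRICE * a - COST) / (2.0 * PRICE * B_MODEL)

u = 1.0                          # initial operating point
for _ in range(20):              # RTO iterations
    y = plant(u)                 # measure the plant at steady state
    a = estimate_a(u, y)         # two-step part 1: parameter update
    u = optimize_setpoint(a)     # two-step part 2: set-point update

# The plant optimum is u* = 3.0, but because of the structural
# mismatch (b = 0.4 vs 0.5) the loop converges near, not at, it.
print(round(u, 3))  # -> 3.333
```

The persistent offset between the converged set point and the true plant optimum is exactly the structural plant/model mismatch problem that the derivative-based methods compared in the thesis are designed to remove.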
12

Musical interactions modeling with computers / Modelagem de interações musicais com dispositivos informáticos

Furlanete, Fábio Parra 16 August 2018 (has links)
Advisor: Jônatas Manzolli / Accompanied by 1 DVD / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Artes / Made available in DSpace on 2018-08-16 / Previous issue date: 2010 / Abstract: This work investigates the possible role of the composer in a situation of collective musical interaction and proposes strategies for the composer's action in this context. It presents examples of these strategies in compositional works and implements one of them as a digital tool that allows the composer to model interactive contexts, design interaction rules, and interfere with the processes as they occur. The digital tools are implemented as a system for collective sound shaping that uses digital game design as a model for musical interaction between artificial agents and networked human players. Our work focuses on the interaction rules and on how they can be designed to produce aesthetically appealing outcomes while not excessively restricting the players' creative autonomy. These rules should be applicable both in Art Education and in performance contexts. We believe that knowledge from the field of networked game design is useful for the design of such rules. / Doctorate / Doctor of Music
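In the spirit of the interaction rules described above, one such rule could pair a turn-taking convention from game design with a simple musical response policy. The rule, the pitch arithmetic, and the MIDI framing below are invented for illustration only; they are not the thesis's system.

```python
# Hedged sketch of one game-like "interaction rule": an artificial
# agent answers a human player's note by imitation or by contrast,
# one turn at a time. All specifics here are invented illustrations.

def agent_response(human_pitch, mode):
    """Return the agent's MIDI pitch (0-127) for one turn."""
    if mode == "imitate":
        return human_pitch            # echo the player
    if mode == "contrast":
        return 127 - human_pitch      # mirror within the MIDI range
    raise ValueError(f"unknown mode: {mode}")

def play_round(human_pitches, mode):
    """One game round: the agent answers each human note in turn."""
    return [agent_response(p, mode) for p in human_pitches]

print(play_round([60, 64, 67], "contrast"))  # -> [67, 63, 60]
```

A rule of this kind is deterministic and easy to explain to players, yet leaves the human side of the exchange entirely free, which is the balance between attractive outcomes and creative autonomy the abstract describes.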
13

Multi-Quality Auto-Tuning by Contract Negotiation

Götz, Sebastian 13 August 2013 (has links) (PDF)
A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant is to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of the self-adaptive systems research community. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability. In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while producing the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system.
At runtime, the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each approach: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
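The shape of the reconfiguration decision described above, choosing one implementation variant per component to maximize utility under a resource budget, can be sketched without a real solver. The component names, utility values, and costs below are invented for illustration; the thesis generates actual ILP/PBO/ACO/MOILP problems from quality contracts and hands them to standard solvers.

```python
# Hedged sketch of an MQuAT-style reconfiguration decision: pick one
# variant per component so total utility is maximal within a resource
# budget. Brute-force enumeration stands in for an ILP solver; all
# numbers and names are invented illustrations.

from itertools import product

# (variant name, utility delivered, resource cost) per component type
VARIANTS = {
    "encoder": [("fast", 3, 5), ("precise", 7, 9)],
    "storage": [("memory", 4, 6), ("disk", 2, 2)],
}
BUDGET = 12  # resources available in the current environment

def best_configuration(variants, budget):
    """Enumerate all configurations; return the feasible one with
    maximal utility (a stand-in for calling a standard solver)."""
    best, best_utility = None, -1
    for combo in product(*variants.values()):
        utility = sum(v[1] for v in combo)
        cost = sum(v[2] for v in combo)
        if cost <= budget and utility > best_utility:
            best, best_utility = combo, utility
    return {comp: v[0] for comp, v in zip(variants, best)}, best_utility

config, utility = best_configuration(VARIANTS, BUDGET)
print(config, utility)  # -> {'encoder': 'precise', 'storage': 'disk'} 9
```

Exhaustive enumeration grows exponentially with the number of component types, which is precisely why the thesis compares solver-backed formalisms and reports per-formalism scalability limits.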
15

Interactions Study of Self Optimizing Schemes in LTE Femtocell Networks

El-murtadi Suleiman, Kais 06 December 2012 (has links)
One of the enabling technologies for Long Term Evolution (LTE) deployments is the femtocell technology. By having femtocells deployed indoors and closer to the user, high data rate services can be provided efficiently. These femtocells are expected to be deployed in large numbers, which raises many technical challenges, including handover management. In fact, conventional manual adjustment techniques cannot keep pace with handover management in such a rapidly growing femtocell environment. Therefore, automating it by implementing Self Organizing Network (SON) use cases becomes a necessity rather than an option. However, having multiple SON use cases operating simultaneously with a shared objective could cause them to interact either negatively or positively. In both cases, designing a suitable coordination policy is critical for resolving negative conflicts and building upon positive benefits. In this work, we focus on studying the interactions between three self-optimization use cases aimed at improving the overall handover procedure in LTE femtocell networks: handover, Call Admission Control (CAC), and load balancing. We develop a comprehensive, unified, LTE-compliant evaluation environment. This environment is extendable to other radio access technologies, including LTE-Advanced (LTE-A), and can also be used to study other SON use cases. Various recommendations made by the main bodies in the area of femtocells are considered, including the Small Cell Forum, the Next Generation Mobile Networks (NGMN) alliance, and the 3rd Generation Partnership Project (3GPP). Additionally, traffic sources are simulated in compliance with these recommendations and evaluation methodologies. We study the interactions between three representative handover-related self-optimization schemes.
We start by testing these schemes separately, to make sure that they meet their individual goals, and then study their mutual interactions when they operate simultaneously. Based on these experiments, we recommend several guidelines that can help mobile network operators and researchers design better coordination policies. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-12-05 22:35:27.538
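The kind of rule a handover self-optimization use case applies can be sketched as a small feedback step: raise the hysteresis margin when ping-pong handovers dominate, lower it when handover failures dominate. The step size, bounds, and counter values below are invented for illustration; the thesis evaluates full LTE-compliant schemes and their interactions with CAC and load balancing.

```python
# Hedged sketch of a handover self-optimization step: adapt the
# hysteresis margin from observed ping-pong handovers and handover
# failures. All numeric specifics are invented illustrations.

HYST_MIN, HYST_MAX, STEP = 0.0, 10.0, 0.5  # margin bounds and step, in dB

def tune_hysteresis(hyst_db, ping_pongs, failures):
    """One optimization step: many ping-pongs -> raise the margin
    (handovers trigger too eagerly); many failures -> lower it
    (handovers trigger too late); equal counts -> leave it alone."""
    if ping_pongs > failures:
        return min(HYST_MAX, hyst_db + STEP)
    if failures > ping_pongs:
        return max(HYST_MIN, hyst_db - STEP)
    return hyst_db

# A femtocell observing many ping-pongs gradually raises its margin:
h = 3.0
for _ in range(4):
    h = tune_hysteresis(h, ping_pongs=12, failures=2)
print(h)  # -> 5.0
```

When such a rule runs next to CAC and load balancing, all three adjust overlapping parameters from overlapping observations, which is exactly the kind of simultaneous operation whose negative and positive interactions the thesis studies.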
