61

Otimização de processos acoplados: programação da produção e corte de estoque / Optimization of coupled processes: production planning and cutting stock

Carla Taviane Lucke da Silva, 15 January 2009
In many manufacturing industries (e.g., paper, furniture, steel, textile), lot-sizing decisions interact with other production planning and scheduling decisions, such as distribution and the cutting process. In practice, however, these decisions are usually treated in isolation, which reduces the solution space, ignores the interdependence between decisions, and thus raises total costs. In this thesis, we study the production process of small furniture plants, which consists of cutting the large plates held in stock into several types of pieces that are subsequently processed in later stages on equipment with limited capacity (the cutting and drilling machines are potential bottlenecks) before finally being assembled into the ordered products. The lot-sizing and cutting stock problems are coupled in a large-scale integer linear optimization model whose objective is to minimize production, inventory, machine setup, and raw-material waste costs simultaneously. The model captures the trade-off between anticipating the production of certain products, which increases inventory costs, and reducing material waste by obtaining better cutting combinations among the pieces. The impact of demand uncertainty (the order book plus an estimated extra quantity) is smoothed by a rolling planning-horizon strategy and by decision variables that represent extra production for the forecast demand at the most suitable moment, aiming at total cost minimization. Two heuristic methods are developed to solve a relaxation of the proposed mathematical model, which has a high degree of complexity. Computational experiments with instances generated from real data collected at a small furniture plant, an analysis of the results, conclusions, and perspectives for future work are presented.
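
To make the coupling concrete, the sketch below sets up a deliberately tiny instance of this kind of model (two periods, two products, two piece types, three cutting patterns) using the open-source PuLP modeller and its bundled CBC solver. All demands, piece requirements, pattern yields, costs and the saw capacity are invented for illustration; the structure (piece-balance constraints linking the cutting decisions to the lot-sizing decisions, inventory balance, setup binaries) is the point, not the thesis's actual model or data.

import pulp

periods  = [0, 1]
products = ["A", "B"]
pieces   = ["p1", "p2"]
demand   = {("A", 0): 30, ("A", 1): 50, ("B", 0): 20, ("B", 1): 40}        # assumed
req      = {("A", "p1"): 2, ("A", "p2"): 1, ("B", "p1"): 1, ("B", "p2"): 3}  # pieces per product
patterns = {"pat1": {"p1": 4, "p2": 1},                                     # pieces per plate
            "pat2": {"p1": 1, "p2": 5},
            "pat3": {"p1": 3, "p2": 3}}
waste    = {"pat1": 0.05, "pat2": 0.08, "pat3": 0.02}                       # trim loss per plate (assumed)
c_prod, c_hold, c_setup, c_waste, saw_cap = 1.0, 0.5, 20.0, 10.0, 60

m = pulp.LpProblem("coupled_lotsizing_cutting", pulp.LpMinimize)
x = pulp.LpVariable.dicts("make",  [(p, t) for p in products for t in periods], lowBound=0, cat="Integer")
I = pulp.LpVariable.dicts("stock", [(p, t) for p in products for t in periods], lowBound=0)
y = pulp.LpVariable.dicts("setup", [(p, t) for p in products for t in periods], cat="Binary")
z = pulp.LpVariable.dicts("cut",   [(j, t) for j in patterns for t in periods], lowBound=0, cat="Integer")

# objective: production + holding + setup + raw-material waste costs
m += (pulp.lpSum(c_prod * x[p, t] + c_hold * I[p, t] + c_setup * y[p, t]
                 for p in products for t in periods)
      + pulp.lpSum(c_waste * waste[j] * z[j, t] for j in patterns for t in periods))

big_m = sum(demand.values())
for t in periods:
    # coupling: cutting must supply the pieces consumed by this period's production
    for q in pieces:
        m += (pulp.lpSum(patterns[j][q] * z[j, t] for j in patterns)
              >= pulp.lpSum(req[p, q] * x[p, t] for p in products))
    m += pulp.lpSum(z[j, t] for j in patterns) <= saw_cap            # cutting capacity
    for p in products:
        prev = I[p, t - 1] if t > 0 else 0
        m += prev + x[p, t] == demand[p, t] + I[p, t]                # inventory balance
        m += x[p, t] <= big_m * y[p, t]                              # setup forcing

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[m.status], " total cost:", pulp.value(m.objective))
for t in periods:
    print("period", t, "plates cut:", {j: z[j, t].value() for j in patterns})

Depending on the cost figures, the solver may choose to cut ahead of demand and hold inventory when that yields better piece combinations and less trim loss, which is the trade-off described in the abstract.
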
62

Integração da otimização em tempo real com controle preditivo. / Integration of real-time optimization with model predictive control.

Glauce Freitas de Souza, 27 April 2007
The main objective of this work is to develop a strategy that integrates real-time (online) optimization with multivariable model predictive control in a single layer: the control and economic optimization problems are solved simultaneously within the same algorithm. The economic objective function is included in the controller in its differential form, i.e., through the gradient of the economic objective. The method was tested by simulation on the reactor-regenerator section of an FCC (fluid catalytic cracking) unit. The dissertation describes the optimization strategy integrated into the predictive controller, whose objective function combines dynamic and static components. An empirical process model is used to determine the optimal steady-state operating conditions of the converter. The best trajectory to drive the process to its optimal operating point, maximizing profit or the production of the most valuable product without violating process constraints, is predicted with a dynamic model obtained from step tests on a rigorous model; this linear model provides the process transfer functions and a state-space representation. The operating point computed by the algorithm respects the constraints on the manipulated and controlled variables both at steady state and during transients. The resulting nonlinear optimization problem is solved with a quadratic programming routine from the Matlab library. A second optimization strategy proposed in this work includes the reduced gradient in the controller's objective function when violations of the controlled-variable constraints are detected, so that the violating variable is excluded from the next computation of the predicted trajectory and from the search direction of the optimization. Simulation results obtained with a rigorous nonlinear model (Moro & Odloak, 1995) show good performance of the algorithms developed here, both in terms of economic benefits and in stabilizing the unit.
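
As a rough, self-contained illustration of the one-layer idea (embedding the gradient of an economic objective directly in the predictive controller's cost), the sketch below builds a toy quadratic program for a first-order plant using cvxpy. The plant model, tuning weights, prices and the way the economic gradient enters (a linear term on the final input) are all assumptions chosen for the example; the dissertation's formulation for the FCC converter is considerably richer.

import cvxpy as cp

# toy first-order plant y+ = a*y + b*u (assumed); steady-state profit phi(u) = p_y*y_ss - p_u*u
a, b = 0.9, 0.5
N, q, r, w = 10, 1.0, 0.1, 0.2
y0, y_sp = 1.0, 2.0
u_min, u_max, du_max = 0.0, 3.0, 0.5
u_prev = 1.0

# gradient of the (negated) economic objective w.r.t. the input at the current
# operating point, using the steady-state gain dy/du = b / (1 - a)
p_y, p_u = 4.0, 1.0
grad_eco = -(p_y * b / (1.0 - a) - p_u)       # scalar here; minimising cost = -profit

u = cp.Variable(N)
y = cp.Variable(N + 1)
cost = w * grad_eco * u[N - 1]                # differential (gradient) economic term
constr = [y[0] == y0, u >= u_min, u <= u_max]
for k in range(N):
    constr += [y[k + 1] == a * y[k] + b * u[k]]         # plant prediction model
    du = u[k] - (u_prev if k == 0 else u[k - 1])
    cost += q * cp.square(y[k + 1] - y_sp) + r * cp.square(du)
    constr += [cp.abs(du) <= du_max]                     # move-size limit
prob = cp.Problem(cp.Minimize(cost), constr)
prob.solve()
print("first control move u0 =", float(u.value[0]))
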
63

ANÁLISE PROBABILÍSTICA DO GERENCIAMENTO DA CONGESTÃO EM MERCADOS DE ENERGIA ELÉTRICA / PROBABILISTIC ANALYSIS OF CONGESTION MANAGEMENT IN ELECTRIC ENERGY MARKETS

Rodrigues, Anselmo Barbosa, 15 August 2003
The restructuring of the electricity industry has increased the number of commercial transactions carried out in energy markets. These transactions are defined by market forces without considering the operational constraints of the transmission system. As a consequence, some transactions cause congestion in the transmission network, that is, violations of operational limits in one or more circuits of the transmission system. Congestion must therefore be eliminated through corrective actions, such as redispatch of generation/transactions and the operation of flow-control devices, to avoid cascading outages with uncontrolled loss of load. Currently, most congestion-management methodologies are based on deterministic models, a choice usually justified by the complexity of applying probabilistic models to generation/transmission systems. Nevertheless, some models have been developed for the probabilistic analysis of congestion management; they are typically based on the non-sequential Monte Carlo method and include only bilateral transactions. Multilateral transactions, however, are also essential to energy markets: they reduce the financial risks associated with commercial transactions and give customers access to energy providers. In addition to ignoring multilateral transactions, the existing probabilistic congestion-management models include only corrective actions that are not cost-free, such as generation redispatch and transaction curtailment. Cost-free corrective actions, such as phase-shifting transformers and FACTS devices, can instead provide low-cost solutions to eliminate congestion in transmission interconnections, which often arises because network reinforcements are delayed by financial and environmental constraints. Finally, existing probabilistic congestion-management models evaluate only indices based on expected values, which system operators find difficult to interpret. It is therefore necessary to develop new indices for the probabilistic analysis of congestion management that reflect traditionally accepted operational criteria and can be easily interpreted by system operators.

This research develops models and techniques for the probabilistic analysis of congestion management. The proposed models and techniques consider the modeling of multilateral transactions and phase-shifting transformers, and the definition of well-being indices to assess the reliability of commercial transactions. These indices establish a link between traditionally used operational criteria and the stochastic model of the electrical network. The models and indices are based on the non-sequential Monte Carlo method and on the linearized (DC) optimal power flow; the optimal power flow problems associated with congestion management are solved with the primal-dual interior-point method. The practical application and validation of the proposed models and indices were carried out through several tests on the IEEE reliability test system proposed in 1996. The main conclusions are that multilateral congestion management can improve the reliability of commercial transactions, that load profiles have significant effects on the well-being indices of the transactions, that the base-case condition has a great impact on the well-being indices associated with a set of transactions, and that the operation of phase-shifting transformers can significantly reduce curtailments of commercial transactions. Financial support: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
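
For orientation, the sketch below shows the kind of computation the abstract describes, reduced to a toy scale: a non-sequential Monte Carlo loop over a 3-bus linearized (DC) network in which each sampled state is evaluated with a small optimal power flow, solved here with SciPy's HiGHS interior-point option. All network data, costs and outage probabilities are invented; line outages are left out (they would require recomputing the PTDF matrix per topology); and the two figures printed at the end are only loose analogues of the well-being indices developed in the thesis.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# 3-bus example, equal reactances, slack and load at bus 3, generators at buses 1 and 2.
# PTDF rows: lines 1-2, 1-3, 2-3; columns: injections at buses 1 and 2.
PTDF  = np.array([[1/3, -1/3], [2/3, 1/3], [1/3, 2/3]])
f_max = np.array([60.0, 100.0, 80.0])       # MW line ratings (assumed)
g_cap = np.array([120.0, 80.0])             # MW generator capacities (assumed)
c_gen = np.array([10.0, 30.0])              # generation costs (assumed)
voll  = 1000.0                               # value of lost load
p_gen_out, base_load = 0.05, 150.0

n_samples, curt = 2000, []
for _ in range(n_samples):
    gens_up = rng.random(2) > p_gen_out                 # non-sequential state sampling
    load = base_load * rng.choice([0.8, 1.0, 1.2])      # simple load-level model
    # decision vector x = [g1, g2, r], where r is load curtailment at bus 3
    c = np.concatenate([c_gen, [voll]])
    A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), [load]     # power balance
    A_ub = np.vstack([np.hstack([PTDF, np.zeros((3, 1))]),
                      np.hstack([-PTDF, np.zeros((3, 1))])])
    b_ub = np.concatenate([f_max, f_max])                # |flow| <= rating
    bounds = [(0, g_cap[0] * gens_up[0]), (0, g_cap[1] * gens_up[1]), (0, load)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs-ipm")     # interior-point LP solve
    curt.append(res.x[2])

curt = np.array(curt)
print("expected curtailment: %.2f MW" % curt.mean())
print("P(any curtailment)  : %.3f" % (curt > 1e-6).mean())
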
64

Theoretical and computational issues for improving the performance of linear optimization methods / Aspectos teóricos e computacionais para a melhoria do desempenho de métodos de otimização linear

Pedro Augusto Munari Junior, 31 January 2013
Linear optimization tools are used to solve many problems that arise in our day-to-day lives. Linear optimization models and methodologies help to find, for example, the best amount of ingredients in our food, the most suitable routes and timetables for the buses and trains we take, and the right way to invest our savings. We could cite many other situations that involve linear optimization, since a large number of companies around the world base their decisions on solutions provided by linear optimization methodologies. In this thesis, we propose theoretical and computational developments to improve the performance of important linear optimization methods, namely simplex-type methods, interior point methods, the column generation technique, and the branch-and-price method. For simplex-type methods, we investigate a variant that exploits special features of problems formulated in the general form; we present a novel theoretical description of the method and propose how to implement it efficiently in practice. Furthermore, we propose how to use the primal-dual interior point method to improve the column generation technique. This results in the primal-dual column generation method, which is more stable in practice and has better overall performance than other column generation strategies. The primal-dual interior point method also offers advantageous features that can be exploited in the context of the branch-and-price method; we show that these features improve the branching operation and the generation of columns and valid inequalities. For all the strategies proposed in this thesis, we present the results of computational experiments involving publicly available, well-known instances from the literature. The results indicate that these strategies help to improve the performance of linear optimization methodologies. In particular, for one class of problems, the vehicle routing problem with time windows, the interior point branch-and-price method proposed in this study was up to 33 times faster than a state-of-the-art implementation available in the literature.
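
The column generation technique mentioned above can be illustrated on the classical cutting stock problem. The sketch below alternates between a restricted master problem, solved here as an LP with PuLP and CBC (whereas the thesis advocates a primal-dual interior point method for this step), and a knapsack pricing subproblem solved by dynamic programming; the roll width, piece widths and demands are textbook-style figures chosen for the example.

import pulp

W      = 100                        # roll / plate width (assumed)
widths = [45, 36, 31, 14]           # piece widths (assumed)
demand = [97, 610, 395, 211]        # piece demands (assumed)

# start with one homogeneous cutting pattern per piece type
patterns = [[W // widths[j] if i == j else 0 for i in range(len(widths))]
            for j in range(len(widths))]

def solve_rmp(patterns):
    """LP relaxation of the restricted master problem: minimise rolls used."""
    prob = pulp.LpProblem("rmp", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{j}", lowBound=0) for j in range(len(patterns))]
    prob += pulp.lpSum(x)
    for i in range(len(widths)):
        prob += (pulp.lpSum(patterns[j][i] * x[j] for j in range(len(patterns)))
                 >= demand[i]), f"dem_{i}"
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    duals = [prob.constraints[f"dem_{i}"].pi for i in range(len(widths))]
    return pulp.value(prob.objective), duals

def price(duals):
    """Pricing: unbounded knapsack max sum(pi_i * a_i) s.t. sum(w_i * a_i) <= W, by DP."""
    best, take = [0.0] * (W + 1), [-1] * (W + 1)
    for cap in range(1, W + 1):
        best[cap], take[cap] = best[cap - 1], -1
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap], take[cap] = best[cap - w] + duals[i], i
    pattern, cap = [0] * len(widths), W
    while cap > 0:                       # reconstruct the best pattern
        if take[cap] == -1:
            cap -= 1
        else:
            pattern[take[cap]] += 1
            cap -= widths[take[cap]]
    return best[W], pattern

while True:
    obj, duals = solve_rmp(patterns)
    value, new_pattern = price(duals)
    if value <= 1.0 + 1e-6:              # reduced cost 1 - value is non-negative: stop
        break
    patterns.append(new_pattern)

print("LP lower bound: %.2f rolls, %d patterns generated" % (obj, len(patterns)))
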
65

FDIPA - algoritmo de pontos interiores e direções viáveis para otimização não-linear diferenciável: um estudo de parâmetros / FDIPA - an interior point and feasible directions algorithm for differentiable nonlinear optimization: a study of parameters

Fonseca, Erasmo Tales, 06 November 2015
This work presents a study of the influence of the parameters of an interior point and feasible directions algorithm for solving nonlinear optimization problems. The algorithm, named FDIPA, aims to find, among the points of a set defined by equality and/or inequality constraints, those that minimize a differentiable function. FDIPA is based on solving two linear systems with the same coefficient matrix, obtained from the Karush-Kuhn-Tucker first-order necessary conditions. From an initial point in the interior of the feasible set, FDIPA generates a sequence of points that are also interior to the set. At each iteration, a descent direction is computed and then deflected towards the interior of the feasible set, producing a new direction that is both a descent direction and feasible. A line search is then performed to obtain a new interior point and ensure the global convergence of the method. A family of algorithms can be obtained by varying the rules used to update the parameters of FDIPA. The study presented here considers a single algorithm and inequality constraints only. Numerical tests pointed to a choice of parameters that led to fewer iterations when solving the test problems. Financial support: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
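
To make the two-system structure tangible, the sketch below implements one simplified reading of an FDIPA iteration for a toy two-variable problem with a single inequality constraint: both linear systems share the same KKT-based coefficient matrix, the descent direction is deflected towards the interior, and an Armijo-type search keeps the iterates strictly feasible. The quasi-Newton matrix is frozen at the identity and the multiplier update is a crude safeguard, so this is a schematic reconstruction for illustration, not the algorithm exactly as studied in the dissertation.

import numpy as np

# Toy problem: minimise f(x) = (x1 - 2)^2 + (x2 - 1)^2  s.t.  g(x) = x1^2 + x2^2 - 4 <= 0
f      = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g      = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0])     # one inequality constraint
jac_g  = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])         # m x n Jacobian

x   = np.array([0.5, 0.5])        # strictly feasible starting point (g(x) < 0)
lam = np.array([1.0])             # positive multiplier estimate
B   = np.eye(2)                   # quasi-Newton matrix, frozen at the identity in this sketch
phi, xi, eta, nu = 1.0, 0.7, 0.1, 0.5

for it in range(200):
    gf, gx, J = grad_f(x), g(x), jac_g(x)
    # both linear systems share one coefficient matrix built from the KKT conditions
    K = np.block([[B,                J.T        ],
                  [np.diag(lam) @ J, np.diag(gx)]])
    sol_a = np.linalg.solve(K, np.concatenate([-gf, np.zeros(1)]))   # descent system
    sol_b = np.linalg.solve(K, np.concatenate([np.zeros(2), -lam]))  # feasibility system
    d_a, lam_a = sol_a[:2], sol_a[2:]
    d_b = sol_b[:2]
    if np.linalg.norm(d_a) < 1e-8:            # approximate KKT point reached
        break
    # deflect d_a towards the interior while preserving descent: gf.d <= xi * gf.d_a < 0
    rho = phi * float(d_a @ d_a)
    if gf @ d_b > 0.0:
        rho = min(rho, (xi - 1.0) * float(gf @ d_a) / float(gf @ d_b))
    d = d_a + rho * d_b
    # Armijo-type step that also keeps the iterate strictly inside the feasible set
    t = 1.0
    while f(x + t * d) > f(x) + eta * t * float(gf @ d) or np.any(g(x + t * d) >= 0.0):
        t *= nu
    x = x + t * d
    lam = np.maximum(lam_a, 1e-3)             # simplified safeguarded multiplier update

print("iterations:", it, " x =", x, " g(x) =", g(x), " f(x) =", f(x))
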
66

Supply Chain Management in Humanitarian Aid and Disaster Relief

Liu, Mingli, January 2014
Humanitarian aid and disaster relief are delivered in times of crisis or natural disaster, such as after a conflict or in response to a hurricane, typhoon, or tsunami. Unlike regular aid programs, aid and relief are provided to deal with emergencies in the immediately affected local areas and to shelter people and refugees impacted by sudden traumatic events. There is evidence that natural and man-made disasters are increasing in number all around the world, affecting hundreds of millions of people every year. In spite of this fact, only in recent years – beginning in 2005 – has management of the supply chain of resources and materials for humanitarian aid and disaster relief been a topic of interest for researchers. Consequently, the academic literature in this field is comparatively new and still sparse, indicating a need for more academic studies. As a key part of the C-Change International Community-University Research Alliance (ICURA) project for managing adaptation to environmental change in coastal communities of Canada and the Caribbean, this thesis develops a framework and analytical model for domestic supply chain management in humanitarian aid and disaster relief in the event of severe storm and flooding in the Canadian C-Change community of Charlottetown, Prince Edward Island. In particular, the focus includes quantitative modeling of two specific aspects of the preparedness phase of emergency management: (1) inventory prepositioning and (2) transportation planning. In addition, this thesis proposes and analyses the characteristics of an effective supply chain management framework in practice to assist Canadian coastal communities in improving their preparation and performance in disaster relief efforts. The results indicate that system effectiveness for Charlottetown improves and the time needed to assist affected people decreases when the central emergency supply is distributed among more than one base station.
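
One of the two quantitative aspects mentioned above, transportation planning from prepositioned inventory, reduces in its simplest form to a transportation linear program. The sketch below, with invented supplies, demands and travel times standing in for Charlottetown data, shows that basic structure using SciPy; the thesis's actual models are richer (storm scenarios, base-station siting, service-time objectives).

import numpy as np
from scipy.optimize import linprog

# prepositioned stock at 2 depots, demand at 3 affected sites (assumed figures)
supply = np.array([400.0, 300.0])             # relief kits available per depot
demand = np.array([200.0, 250.0, 180.0])      # kits needed per site
cost = np.array([[4.0, 6.0, 9.0],             # travel time (h) from depot i to site j
                 [7.0, 3.0, 5.0]])

n_d, n_s = cost.shape
c = cost.ravel()                              # x[i*n_s + j] = kits shipped from depot i to site j
A_ub = np.zeros((n_d, n_d * n_s)); b_ub = supply
for i in range(n_d):
    A_ub[i, i * n_s:(i + 1) * n_s] = 1.0      # each depot ships at most its stock
A_eq = np.zeros((n_s, n_d * n_s)); b_eq = demand
for j in range(n_s):
    A_eq[j, j::n_s] = 1.0                     # each site receives exactly its demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("shipment plan (kits):\n", res.x.reshape(n_d, n_s))
print("total kit-hours:", res.fun)
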
67

Optimization of Harvesting Natural Resources / Optimalizace těžby přírodních zdrojů

Chrobok, Viktor, January 2008
The thesis describes various modifications of the predator-prey model that consider several harvesting methods. At the beginning, a solution and a sensitivity analysis of the basic model are provided. The first modification is the percentage (proportional) harvesting model, which can easily be converted to the basic model. Secondly, constant harvesting is derived, including a linearization. A significant part is devoted to regulation models, with a special focus on environmental applications and the stability of the system. Optimization algorithms for harvesting one or both species are derived and back-tested. Single-species harvesting is based on econometric tools; the core of two-species harvesting is a modified Newton's method. The economic applications of the model in macroeconomics and oligopoly theory are expanded using the methods derived in the thesis.
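
A minimal numerical companion to the percentage-harvesting case described above: the sketch below integrates a Lotka-Volterra system with proportional harvesting of both species using SciPy, with all rate constants assumed for illustration. Proportional harvesting keeps the system in Lotka-Volterra form, which is why it can easily be converted to the basic model; only the coexistence equilibrium shifts.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 0.1, 0.02, 0.4    # assumed Lotka-Volterra rates
h_prey, h_pred = 0.2, 0.1                          # proportional harvesting efforts

def rhs(t, z):
    x, y = z
    dx = alpha * x - beta * x * y - h_prey * x     # prey: growth - predation - harvest
    dy = delta * x * y - gamma * y - h_pred * y    # predator: conversion - death - harvest
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 100.0), [40.0, 9.0], rtol=1e-8)

# the coexistence equilibrium without and with harvesting
print("no harvest :", gamma / delta, alpha / beta)
print("with harvest:", (gamma + h_pred) / delta, (alpha - h_prey) / beta)
print("state at t = 100:", sol.y[:, -1])
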
68

Supply Chain Operations Planning in a Carbon Cap and Trade Market

Mysyk, Jessica Marie, 06 May 2020
No description available.
69

Possibilities with Stirling Engine and High Temperature Thermal Energy Storage in Multi-Energy Carrier System : An analysis of key factors influencing techno-economic perspective of Stirling engine and high-temperature thermal energy storage

Myska, Martin, January 2021
Small and medium-scale companies are trying to minimise their carbon footprint and improve their cash flow, and renewable installations are increasing all over Europe and are expected to keep doing so in the coming years. However, their dependency on the weather creates pressure to match production with demand, and one way to address this problem is energy storage. The aim of this project is to determine the techno-economic benefits of a Stirling engine and high-temperature thermal energy storage (SE-HT-TES) installed in an energy user's system and to identify the key factors that affect the operation of such a system. To determine these factors, simulations were conducted in Matlab using its linear programming tool Optisolve with the dual-simplex algorithm. A sensitivity analysis was conducted to test the energy system's behaviour, and the economic evaluation was based on discounted savings. The results show that a significant benefit of the SE-HT-TES installation is the increased self-consumption of electricity from the PV installation: while self-consumption without energy storage was around 67 % (and in one case as low as 50 %), with the SE-HT-TES it increased to up to 100 %. Energy cost savings are 4.7 % of the cost for the original data set and rise to 6.2 % when a load-shift simulation is executed. The simulations also showed that an energy customer with a predictable energy demand pattern can achieve higher savings with the very same system. It was also confirmed that, for users whose private renewable production does not match their load, potential savings are 30 % higher compared to a system where the load peak matches the PV production peak. The simulations also show that customers located in areas with higher electricity price volatility can benefit greatly from such a system.
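
The kind of linear program behind such simulations can be sketched compactly. The toy model below (four hours, invented prices, loads, PV yields and storage parameters, and a single generic storage standing in for the SE-HT-TES) minimizes the cost of grid imports with SciPy's dual-simplex option, echoing the dual-simplex solves mentioned above; surplus PV is implicitly curtailed and no export tariff is modelled.

import numpy as np
from scipy.optimize import linprog

T = 4
price = np.array([0.30, 0.25, 0.20, 0.35])    # electricity prices (assumed)
load  = np.array([4.0, 3.0, 2.0, 5.0])        # demand per hour, kWh (assumed)
pv    = np.array([1.0, 4.0, 5.0, 0.5])        # PV yield per hour, kWh (assumed)
cap, ch_max, dis_max, eta, s0 = 6.0, 3.0, 3.0, 0.9, 1.0

# variable layout: x = [import(0..T-1), charge(0..T-1), discharge(0..T-1), soc(0..T-1)]
n = 4 * T
c = np.zeros(n); c[:T] = price                # pay only for grid imports

# energy balance: import + discharge - charge >= load - pv
A_ub = np.zeros((T, n)); b_ub = pv - load
for t in range(T):
    A_ub[t, 0 * T + t] = -1.0
    A_ub[t, 2 * T + t] = -1.0
    A_ub[t, 1 * T + t] = +1.0

# storage dynamics: soc[t] - soc[t-1] - eta*charge[t] + discharge[t]/eta = 0 (soc[-1] = s0)
A_eq = np.zeros((T, n)); b_eq = np.zeros(T); b_eq[0] = s0
for t in range(T):
    A_eq[t, 3 * T + t] = 1.0
    if t > 0:
        A_eq[t, 3 * T + t - 1] = -1.0
    A_eq[t, 1 * T + t] = -eta
    A_eq[t, 2 * T + t] = 1.0 / eta

bounds = ([(0, None)] * T + [(0, ch_max)] * T + [(0, dis_max)] * T + [(0, cap)] * T)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs-ds")        # dual-simplex LP solve
print("cost with storage   : %.2f" % res.fun)
print("cost without storage: %.2f" % float(price @ np.maximum(load - pv, 0.0)))
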
70

Development and testing of algorithms for optimal thruster command distribution during MTG orbital manoeuvres

Sprengelmeyer, Lars, January 2020
Accurate satellite attitude and orbit control is a key factor for a successful mission. It guarantees, for example, sun acquisition on the solar panels, fine pointing for optimal telescope usage, or orbit raising when required. Furthermore, attitude and orbit control is applied to compensate for any disturbances arising in the space environment. The problem tackled in the present thesis is the optimization of thruster commanding to perform spacecraft orbital manoeuvres. The main objective is to develop different algorithms that are suitable for on-board implementation and to compare their performance. For an optimal thruster command distribution, the algorithms shall solve linear programming (optimization) problems; more precisely, they shall compute thruster on-times that generate the torques and/or forces requested by the on-board software. In total, three algorithms are developed: the first is based on the pseudoinverse of a matrix, the second is a variation of the Simplex method, and the third is based on Karmarkar's algorithm, which belongs to the interior-point methods. The last two are well-known procedures for solving linear programming problems and have been analyzed in theory before. This thesis, however, demonstrates their practical application and industrial feasibility for orbital manoeuvres of the weather satellites of ESA's MTG project, and their scalability to any number of thrusters on a generic satellite for 6-degrees-of-freedom manoeuvres. There are 6 MTG satellites, each with 16 one-sided reaction control thrusters placed at specific positions and pointed in defined directions. Physical mechanisms limit the thrusters' output to minimum on- and off-times. The focus of this thesis is on the orbital transfer mode, due to the high disturbances that arise during the four motor firing sessions at apogee, executed to reach higher orbits and finally GEO. The firing sessions are performed by a liquid apogee engine, and while this engine is in boost mode, the thrusters shall be used for attitude control only. The technique (nominal case) developed by OHB for this manoeuvre, and currently operational, uses only 4 thrusters, all pointing in the engine's direction; they are also used to settle the fuel before the engine is turned on, and the pseudoinverse method is applied for control. If one of the 4 thrusters fails, the backup scenario takes place, which uses 4 entirely different thrusters and no fuel settling, due to their unfavorable position with respect to the engine. The initial idea of this work was to develop a controller for 6 thrusters, using only 2 of the 4 nominal-case thrusters, to obtain better control performance in the backup case. The pseudoinverse method had been developed by OHB before, so only small changes were needed to make it work with 6 thrusters; the two other algorithms, based on the Simplex and Karmarkar methods, were developed completely from scratch. To analyze their performance, several tests were executed, including unit tests on simple computer hardware with different inputs, Monte Carlo simulations on a cluster to test whether the algorithms are suitable for MTG orbital manoeuvres, and the application to 12 thrusters mounted on a generic satellite to generate torques and forces at the same time for 6-degrees-of-freedom manoeuvres. For each thruster configuration, the worst-case outputs are shown in so-called minimum control authority plots.

The performance analysis covers the maximum and average deviation between requested and generated torque/force, the average computed thruster on-times, and the algorithms' computation (running) time and iteration steps. For MTG, the test results clearly confirm that using 6 thrusters leads to more accurately generated torques and better control authority than using only 4 thrusters. The Simplex method stands out here in particular, showing excellent performance regarding torque precision. Nevertheless, the accuracy comes at the expense of computational effort: while the pseudoinverse method is very fast and needs only one iteration step, the Simplex method is about half an order of magnitude slower and the Karmarkar method about one order of magnitude slower. However, the latter two lead to lower thruster on-times in terms of firing duration, and thus fuel consumption is reduced. It is also shown that the Simplex and Karmarkar methods can control 12 thrusters at the same time to generate torques and forces, which proves their scalability to any thruster distribution. In the end, it comes down to the question of whether generating a more accurate torque/force or the computational effort, which is strongly hardware dependent, is more important; that decision depends on the mission's objective. This thesis shows that all three implemented algorithms are able to handle attitude control in the MTG backup scenario and beyond.
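
The core allocation problem described above can be written as a small linear program: choose non-negative thruster on-times that reproduce a requested torque while minimizing total firing time. The sketch below uses an idealized 6-thruster layout (one opposing pair per axis, unit torque columns, all figures assumed) and compares the LP solution with a clipped pseudoinverse solution; with one-sided thrusters the clipped pseudoinverse no longer meets the request exactly, which is the kind of accuracy gap the thesis quantifies.

import numpy as np
from scipy.optimize import linprog

# idealized layout: one opposing thruster pair per axis, unit torque per second of on-time
A = np.hstack([np.eye(3), -np.eye(3)])        # 3 x 6 torque-influence matrix (assumed)
tau_req = np.array([0.4, -0.2, 0.1])          # requested torque impulse (assumed units)
t_max = 1.0                                    # maximum on-time per control cycle

# LP, the problem a simplex- or interior-point-type solver works on:
# minimise total on-time subject to A t = tau_req and 0 <= t <= t_max
res = linprog(c=np.ones(6), A_eq=A, b_eq=tau_req,
              bounds=[(0.0, t_max)] * 6, method="highs")
t_lp = res.x

# pseudoinverse alternative: minimum-norm solution, clipped to the one-sided actuator range
t_pinv = np.clip(np.linalg.pinv(A) @ tau_req, 0.0, t_max)

for name, t in [("LP (simplex-type)    ", t_lp), ("clipped pseudoinverse", t_pinv)]:
    print(name, "on-times:", np.round(t, 3),
          "| torque error:", np.round(A @ t - tau_req, 4))
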
