61 |
A distributed computing architecture to enable advances in field operations and management of distributed infrastructure
Khan, Kashif January 2012 (has links)
Distributed infrastructures (e.g., water networks and electric grids) are difficult to manage due to their scale, lack of accessibility, complexity, ageing and uncertainties in knowledge of their structure. In addition, they are subject to loads that can be highly variable and unpredictable, and to accidental events such as component failure, leakage and malicious tampering. To support in-field operations and central management of these infrastructures, the availability of consistent and up-to-date knowledge about the current state of the network, and about how it would respond to planned interventions, is argued to be highly desirable. However, at present, large-scale infrastructures are “data rich but knowledge poor”. Data, algorithms and tools for network analysis are improving, but there is a need to integrate them to support engineering operations more directly. Current ICT solutions are mainly based on specialized, monolithic and heavyweight software packages that restrict the dissemination of dynamic information and its appropriate and timely presentation, particularly to field engineers who operate in resource-constrained and less reliable environments. This thesis proposes a solution to these problems by recognizing that current monolithic ICT solutions for infrastructure management seek to meet the requirements of different human roles and operating environments (defined in this work as the field and central sides). It proposes an architectural approach to providing dynamic, predictive, user-centric, device- and platform-independent access to consistent and up-to-date knowledge. This architecture integrates the components required to implement the functionalities of data gathering, data storage, simulation modelling, and information visualization and analysis. These components are tightly coupled in current implementations of software for analysing the behaviour of networks. The architectural approach, by contrast, requires that they be kept as separate as possible and interact only when required, using common and standard protocols. The thesis concentrates in particular on engineering practices in clean water distribution networks, but the methods are applicable to other structural networks, for example the electricity grid. A prototype implementation is provided that establishes a dynamic hydraulic simulation model and enables the model to be queried via remote access in a device- and platform-independent manner. This thesis provides an extensive evaluation comparing the architecture-driven approach with current approaches, to substantiate the above claims. The evaluation is conducted using benchmarks that are currently published and accepted in the water engineering community. To facilitate this evaluation, a working prototype of the whole architecture has been developed and is made available under an open source licence.
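As an informal illustration of the device- and platform-independent remote access described above, the sketch below queries a hydraulic simulation service over HTTP and parses a JSON reply. The endpoint URL, request fields and response key are hypothetical and are not taken from the thesis or its prototype.

```python
import json
import urllib.request

# Hypothetical service exposing the hydraulic simulation model (illustrative only).
SIM_URL = "http://example.org/watersim/api/simulate"

def query_node_pressure(node_id, demand_lps):
    """Ask the remote simulation service for the predicted pressure at one node."""
    payload = json.dumps({"node": node_id, "demand_lps": demand_lps}).encode("utf-8")
    req = urllib.request.Request(
        SIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return result["pressure_m"]          # hypothetical response field

# A field engineer's device only needs HTTP and JSON, not the simulator itself.
print(query_node_pressure("J-117", demand_lps=2.5))
```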
|
62 |
Objeto de aprendizagem para o ensino de algoritmos solucionadores de problemas de otimização em redes [Learning object for teaching algorithms that solve network optimization problems]
Lourenço, Wilson Da Silva 26 February 2015 (has links)
The network optimization problems (NOP) are common to several areas such as engineering, transport and telecommunications, and have been the object of intense research and study. Among the classical NOP are the Shortest Path Problem (SPP), the Max Flow Problem (MFP) and the Traveling Salesman Problem (TSP), which are usually studied in undergraduate and graduate courses such as Industrial Engineering, Computer Science, Information Systems and Logistics using resources such as chalk and blackboard, which make it difficult for the teacher to show how the algorithms that solve these problems work while keeping students motivated to learn. In this context, this research proposes a computational tool, characterized as a Learning Object (LO) and called TASNOP - Teaching Algorithms for Solving Network Optimization Problems, whose purpose is to help students understand NOP concepts and, above all, the functioning of the algorithms A*, Greedy Search and Dijkstra, used to solve the SPP, Ford-Fulkerson, employed for the MFP, and Nearest Neighbor, used to solve the TSP. It is important to highlight that the proposed LO can be accessed through the web and also embedded in distance learning environments (DLE). Experiments conducted in 2014 with 129 Computer Science students, of whom 51 solved an exercise using TASNOP and 78 without the tool, confirm that students who used TASNOP performed better in solving the proposed exercise, corroborating the idea that the LO helped improve their understanding of the algorithms addressed in this research. In addition, the 51 students who used TASNOP answered a questionnaire about its use, and their answers indicated that TASNOP has potential as a learning support tool.
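As a rough illustration of one of the algorithms the tool is meant to demonstrate, the sketch below implements Dijkstra's shortest-path algorithm on a small adjacency-list graph; the example graph and function names are illustrative and not taken from TASNOP.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source on a weighted adjacency-list graph."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Small illustrative network (not taken from the tool).
graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```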
|
63 |
Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach
Li, Tao 13 October 2020
Software Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane. To assist the management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes have various types of resources, such as bandwidth used to transmit monitoring data, energy spent to power data forwarding devices, and computational resources to control the network. Unwise management, or even abusive utilization, of these resources leads to degraded network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking.
However, the heterogeneity of network hardware and traffic workloads expands the configuration space of SDN, making it a challenging task to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimization to perform joint traffic engineering, e.g., routing and hardware and software configuration. The overall goal is to overcome the management complexity of conserving and protecting resources in the multiple functional planes of SDN in the face of network heterogeneity and system dynamics. This thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) using self-adaptive algorithms to improve network resource efficiency, and (4) mitigating abusive usage of resources for network control.
The first contribution of this thesis is a resource-efficient network monitoring solution. We consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original and the duplicated data, network operators can transmit the data flows through physically (e.g., different communication media) or virtually (e.g., distinct network slices) separated channels with different resource consumption properties. We propose REMO, a Resource Efficient distributed Monitoring solution, to reduce the overall network resource consumption incurred by both types of data by jointly considering the locations of the packet monitors, the selection of the devices that fork the data packets, and the flow path scheduling strategy.
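The flavour of this joint placement and forking decision can be conveyed with the toy sketch below, which exhaustively places monitors so that duplicated packets travel the fewest hops; the topology, cost model and per-flow fork choice are illustrative assumptions, not the REMO algorithm itself.

```python
from itertools import combinations

# Toy topology: hop distances between nodes (symmetric, illustrative only).
nodes = ["r1", "r2", "r3", "r4"]
dist = {
    ("r1", "r1"): 0, ("r1", "r2"): 1, ("r1", "r3"): 2, ("r1", "r4"): 3,
    ("r2", "r2"): 0, ("r2", "r3"): 1, ("r2", "r4"): 2,
    ("r3", "r3"): 0, ("r3", "r4"): 1,
    ("r4", "r4"): 0,
}
def d(a, b):
    return dist.get((a, b), dist.get((b, a)))

# Each monitored flow can have its packets forked at any node on its path.
flows = {"f1": ["r1", "r2"], "f2": ["r2", "r3"], "f3": ["r3", "r4"]}

def duplicate_traffic_cost(monitor_sites):
    """Hops travelled by duplicated packets if every flow forks at its best node."""
    return sum(
        min(d(fork, m) for fork in path for m in monitor_sites)
        for path in flows.values()
    )

# Exhaustively place k monitors so the duplicated traffic uses the fewest hops.
k = 1
best = min(combinations(nodes, k), key=duplicate_traffic_cost)
print(best, duplicate_traffic_cost(best))   # e.g. ('r2',) 1
```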
In the second contribution of this thesis, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60 GHz wireless technology). The configuration space of a hybrid SDN equipped with both wired and wireless communication technologies is massive due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework, which reduces power consumption while maintaining network performance.
Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operational and facility resource consumption in SDN. These approaches are either difficult to solve directly or specific to a particular problem space. As the third contribution of this thesis, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent in the SDN network is to reduce the power consumption of SDN networks without severely degrading network performance.
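A minimal sketch of such an agent loop is given below: tabular Q-learning over a toy state of powered links and offered load, with a reward that trades off power against a congestion penalty. The environment, state space and reward are invented for illustration and are far simpler than the DRL setup studied in the thesis.

```python
import random

random.seed(0)

LINK_CAPACITY = 10.0           # traffic units one powered link can carry
POWER_PER_LINK = 1.0
ACTIONS = (-1, 0, 1)           # power one link down / keep / power one link up
LOADS = (5.0, 15.0, 25.0, 35.0)

def reward(links, load):
    congestion = max(0.0, load - links * LINK_CAPACITY)
    return -POWER_PER_LINK * links - 5.0 * congestion

Q = {}                         # Q[(state, action)] -> value, state = (links, load)
alpha, gamma, eps = 0.1, 0.9, 0.2
links, load = 4, random.choice(LOADS)

for _ in range(20000):
    state = (links, load)
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    links = min(4, max(1, links + action))
    load = random.choice(LOADS)            # demand fluctuates between slots
    r = reward(links, load)
    best_next = max(Q.get(((links, load), a), 0.0) for a in ACTIONS)
    Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                          + alpha * (r + gamma * best_next))

# Greedy action learned when all four links are on, for each load level.
print({ld: max(ACTIONS, key=lambda a: Q.get(((4, ld), a), 0.0)) for ld in LOADS})
```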
The fourth contribution of this thesis is a protection mechanism based on flow rate limiting to mitigate abusive usage of control plane resources in SDN. Due to the centralized architecture of SDN and its handling mechanism for new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) that effectively reduces the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm.
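Flow rate limiting of the kind this mechanism relies on can be pictured with the token-bucket sketch below; the rate, burst size and per-source granularity are illustrative assumptions rather than INFAS's actual parameters.

```python
import time

class TokenBucket:
    """Per-source limiter for flow-setup requests; parameters are illustrative."""
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True          # forward the flow request to the controller
        return False             # drop or delay it in the network

limiter = TokenBucket(rate_per_s=100, burst=20)
accepted = sum(limiter.allow() for _ in range(1000))   # a burst of 1000 new flows
print(accepted)   # roughly the burst size, since the burst arrives almost instantly
```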
In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel and intelligent models and algorithms that configure networks and perform network traffic engineering within the protected, centralized network controller.
|
64 |
Optimalizace přístupové sítě UMTS / Optimization of UMTS access network
Havlíček, Karel January 2008 (has links)
This master’s thesis deals with UMTS radio access network optimization, covering radio interface analysis, description of services and procedures, ways to correctly calculate and set parameters, and other issues necessary for correct access network operation. The goal is effective network operation with minimum cost and maximum performance and flexibility. Optimization during system operation is important because of the character of the system. UMTS uses WCDMA technology, in which individual users share the same frequency band and are distinguished from one another by code sequences. The capacity of such a system is therefore given by the interference level - each user increases the interference level by a value corresponding to his transmit power. The maximum cell capacity is determined by the maximum interference level at which users can still operate with the required services, so it is related not only to the number of users but also to their bit rates. Optimization allows the system to be used effectively for different services with different requirements. The main optimization tool is radio resource management, which comprises a number of algorithms, such as admission control, which decides whether to accept or reject a new user demanding a certain service, power control, which ensures that users transmit with the minimal power sufficient for the required service, and handover and cell selection algorithms. The major parameter used by these algorithms is the cell load factor, which is related to the interference level margin. There are several methods for estimating the load factor, and some of them are described in this work, together with some other optimization techniques. This work also contains a proposed laboratory exercise introducing radio resource management using the OPNET Modeler network simulation tool.
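For reference, a widely used closed-form estimate of the WCDMA uplink load factor sums each user's contribution, determined by the chip rate, bit rate, required Eb/N0 and activity factor, and scales the sum by the other-to-own-cell interference ratio. The sketch below implements that textbook formula with illustrative service parameters, not values taken from this thesis.

```python
import math

W = 3.84e6   # WCDMA chip rate [chips/s]

def user_load(eb_no_db, bit_rate, activity):
    """Load contribution of one user: 1 / (1 + W / (Eb/N0 * R * v))."""
    eb_no = 10 ** (eb_no_db / 10)
    return 1.0 / (1.0 + W / (eb_no * bit_rate * activity))

def uplink_load(users, other_cell_ratio=0.65):
    """Cell load factor eta_UL = (1 + i) * sum of per-user loads."""
    return (1 + other_cell_ratio) * sum(user_load(*u) for u in users)

# Illustrative mix: 20 voice users (12.2 kbps) and 3 data users (144 kbps).
users = [(5.0, 12200, 0.67)] * 20 + [(1.5, 144000, 1.0)] * 3
eta = uplink_load(users)
noise_rise_db = -10 * math.log10(1 - eta) if eta < 1 else float("inf")
print(round(eta, 3), round(noise_rise_db, 2))   # load factor and resulting noise rise
```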
|
65 |
Design of Mobility Cyber Range and Vision-Based Adversarial Attacks on Camera Sensors in Autonomous Vehicles
Ramayee, Harish Asokan January 2021
No description available.
|
66 |
Optimization of information flows in telecommunication networks / Optimisation de flots d'information dans les réseaux de télécommunications
Lefebvre, Thibaut 27 June 2016
In telecommunication networks, the increasing demand for new services, like video streaming or teleconferencing, along with the now common situation where the same content is simultaneously requested by a huge number of users, stresses the need for point-to-many data transmission protocols in which one sender wishes to transmit the same data to a set of receivers. This evolution leads to the development of new routing techniques like multicast, where any node of the network can copy its received data and then send the copies, or network coding, a technique allowing any node to perform coding operations on its data. This thesis deals with the implementation of coding techniques in a wired multicast network. We formalize some problems naturally arising in this setting using operations research and mathematical optimization tools. Our objective is to develop models and algorithms that can compute, at least approximately, quantities that are relevant when comparing data forwarding techniques such as multicast and network coding in telecommunication networks. We hence evaluate, both theoretically and numerically, the impact of introducing coding techniques in a multicast network. We specifically investigate criteria relevant to the field of telecommunications, such as the maximum amount of information one can expect to convey from a source to a set of receivers through the network, the minimum congestion one can guarantee while satisfying a given demand, and the minimum loss in throughput or cost induced by survivable routing in a network prone to failures.
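One classical result behind this comparison is that, with network coding, the achievable multicast rate from a source to a set of receivers equals the smallest source-to-receiver max-flow. The sketch below computes that bound on the classic butterfly example using networkx; the topology and capacities are chosen purely for illustration.

```python
import networkx as nx

# Classic butterfly topology, unit capacities (illustrative example).
G = nx.DiGraph()
edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
         ("a", "c"), ("b", "c"), ("c", "d"), ("d", "t1"), ("d", "t2")]
G.add_edges_from(edges, capacity=1)

receivers = ["t1", "t2"]
per_receiver = {t: nx.maximum_flow_value(G, "s", t) for t in receivers}

# With network coding the whole multicast session can run at the smallest max-flow.
print(per_receiver, min(per_receiver.values()))   # {'t1': 2, 't2': 2} 2
```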
|
67 |
Bridging Sim-to-Real Gap in Offline Reinforcement Learning for Antenna Tilt Control in Cellular Networks / Överbrygga Sim-to-Real Gap i inlärning av offlineförstärkning för antennlutningskontroll i mobilnät
Gulati, Mayank January 2021
Antenna tilt is the angle between the antenna's radiation beam and the horizontal plane. This angle plays a vital role in determining the coverage of the network and its interference with neighbouring cells and adjacent base stations. Traditional methods for network optimization rely on rule-based heuristics for antenna tilt decisions to achieve the desired network characteristics. However, these methods are quite brittle and incapable of capturing the dynamics of communication traffic. Recent advancements in reinforcement learning have made it a viable solution to overcome this problem, but even this learning approach is either limited to its simulation environment or limited to off-policy offline learning. So far, there has not been any effort to overcome these limitations so as to make the approach applicable in the real world. This work proposes a method for transferring reinforcement learning policies from a simulated environment to a real environment, i.e., sim-to-real transfer through the use of offline learning. The approach makes use of a simulated environment and a fixed dataset to compensate for the aforementioned limitations. The proposed sim-to-real transfer technique utilizes a hybrid policy model composed of a portion trained in simulation and a portion trained on offline real-world data from the cellular network. This makes it possible to merge samples from the real-world data into the simulated environment, modifying the standard reinforcement learning training procedure through knowledge sharing between the two environments' representations. On the one hand, simulation achieves better generalization performance than conventional offline learning, as it complements offline learning with learning through unseen simulated trajectories. On the other hand, the offline learning procedure helps close the sim-to-real gap by exposing the agent to real-world data samples. Consequently, this transfer learning regime enables us to establish optimal antenna tilt control, which in turn results in improved coverage and reduced interference with neighbouring cells in the cellular network.
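One way to picture the hybrid training described above is a batch sampler that mixes logged real-network transitions with freshly generated simulator transitions at a fixed ratio. The sketch below is an illustrative assumption about how such mixing could look; the transition format, reward proxy and mixing ratio are invented and are not the thesis's implementation.

```python
import random

random.seed(1)

# Transitions are (state, action, reward, next_state); contents are made up.
real_logged = [(("tilt", random.randint(0, 10)), random.choice((-1, 0, 1)),
                round(random.uniform(-1.0, 0.0), 3), ("tilt", random.randint(0, 10)))
               for _ in range(500)]                      # fixed offline dataset

def simulator_step():
    tilt = random.randint(0, 10)
    action = random.choice((-1, 0, 1))
    reward = -abs(tilt + action - 6) / 6.0               # made-up coverage proxy
    return ("tilt", tilt), action, reward, ("tilt", tilt + action)

def sample_batch(batch_size=32, real_fraction=0.5):
    """Mix logged real-network transitions with freshly simulated ones."""
    n_real = int(batch_size * real_fraction)
    batch = random.sample(real_logged, n_real)
    batch += [simulator_step() for _ in range(batch_size - n_real)]
    random.shuffle(batch)
    return batch

batch = sample_batch()
print(len(batch), "transitions per update,", int(32 * 0.5), "drawn from the real log")
```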
|
68 |
Exploring the Depth-Performance Trade-Off : Applying Torch Pruning to YOLOv8 Models for Semantic Segmentation Tasks / Utforska kompromissen mellan djup och prestanda : Tillämpning av Torch Pruning på YOLOv8-modeller för uppgifter om semantisk segmentering
Wang, Xinchen January 2024
In order to comprehend environments from different aspects, a large variety of computer vision methods have been developed to detect objects, classify them, or even segment them semantically. Semantic segmentation is growing in significance due to its broad applications in fields such as robotics, environmental understanding for virtual or augmented reality, and autonomous driving. The development of convolutional neural networks, as a powerful tool, has contributed to solving classification and object detection tasks, with a trend towards larger and deeper models. Models are hard to compare from the perspective of depth alone because their structures differ. At the same time, semantic segmentation is computationally demanding because it requires classifying each pixel into a class. Running these complicated processes on resource-constrained embedded systems may cause performance degradation in terms of inference time and accuracy. Network pruning, a model compression technique that aims to eliminate redundant parameters in a model according to a certain evaluation rule, is one solution. Most traditional network pruning methods, structured or unstructured, apply zero masks over the original parameters rather than literally eliminating the connections. A newer pruning method, Torch-Pruning, provides a general-purpose library for structural pruning. The method is based on the dependencies between parameters; it can remove groups of less important parameters and reconstruct the model. YOLOv8, a cutting-edge line of work addressing several computer vision tasks, offers pre-trained models from nano, small and medium to large and xlarge, with similar structure but different parameter counts for different applications. This thesis applies Torch-Pruning to YOLOv8 semantic segmentation models to compare pruning performance across existing models with similar structures, which makes it meaningful to study model depth as a factor. Several pruning configurations have been explored. The results show that greater depth does not always lead to better performance. In addition, pruning can bring greater generalization ability for medium-level Gaussian noise, from 20% to 40%, compared with the original models.
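The dependency idea behind structural pruning - removing an output channel from one convolution forces the matching input channel of the next layer to be removed as well - can be sketched in plain PyTorch as below. This is an illustrative toy, not the Torch-Pruning library's API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)

# Rank conv1's output channels by the L1 norm of their filters and keep the top half.
l1 = conv1.weight.detach().abs().sum(dim=(1, 2, 3))        # shape: (8,)
keep = torch.argsort(l1, descending=True)[:4].sort().values

# Rebuild conv1 with fewer output channels; the dependency means conv2 must lose
# the corresponding input channels so the two layers still fit together.
pruned1 = nn.Conv2d(3, len(keep), kernel_size=3, padding=1)
pruned1.weight.data = conv1.weight.data[keep].clone()
pruned1.bias.data = conv1.bias.data[keep].clone()

pruned2 = nn.Conv2d(len(keep), 16, kernel_size=3, padding=1)
pruned2.weight.data = conv2.weight.data[:, keep].clone()
pruned2.bias.data = conv2.bias.data.clone()

x = torch.randn(1, 3, 32, 32)
print(pruned2(pruned1(x)).shape)   # torch.Size([1, 16, 32, 32])
```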
|
69 |
Convolutional Neural Network Optimization Using Genetic Algorithms
Reiling, Anthony J. January 2017
No description available.
|
70 |
Demand Response in Smart Grid
Zhou, Kan 16 April 2015
Conventionally, to support varying power demand, the utility company must be prepared to supply more electricity than is actually needed, which causes inefficiency and waste. With the increasing penetration of renewable energy, which is intermittent and stochastic, balancing power generation and demand becomes even more challenging. Demand response, which reschedules part of the elastic load on the users' side, is a promising technology for increasing power generation efficiency and reducing costs. However, efficiently coordinating all the distributed, heterogeneous elastic loads is a major challenge and has sparked numerous research efforts.
In this thesis, we investigate different methods to provide demand response and improve power grid efficiency.
First, we consider how to schedule the charging process of Plug-in Hybrid Electric Vehicles (PHEVs) so that demand peaks caused by PHEV charging are flattened. Existing solutions are either centralized, which may not be scalable, or decentralized based on real-time pricing (RTP), which may not be immediately applicable in many markets. Our proposed PHEV charging approach does not need complicated, centralized control and can be executed online in a distributed manner. In addition, we extend our approach and apply it to the distribution grid to solve bus congestion and voltage drop problems by controlling the access probability of PHEVs. One advantage of our algorithm is that it does not need accurate predictions of the base load or of future user behaviour. Furthermore, it is deployable even when the grid is large.
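The access-probability idea can be pictured with the toy simulation below, in which each waiting PHEV independently starts charging in a time slot with a probability chosen so that the expected aggregate charging load fits under a feeder limit. The numbers and the way the probability is set are illustrative assumptions, not the algorithm developed in the thesis.

```python
import random

random.seed(2)

FEEDER_LIMIT = 100.0        # kW the feeder can carry (illustrative)
CHARGE_POWER = 3.3          # kW drawn by one charging PHEV
SLOTS = 12
base_load = [70, 75, 80, 85, 90, 85, 80, 70, 60, 55, 50, 45]   # kW base load per slot

waiting = 40                # PHEVs that each still need one slot of charging
served_per_slot = []
for t in range(SLOTS):
    headroom = max(0.0, FEEDER_LIMIT - base_load[t])
    # Broadcast an access probability so the *expected* PHEV load fits the headroom.
    p = min(1.0, headroom / (waiting * CHARGE_POWER)) if waiting else 0.0
    charging = sum(random.random() < p for _ in range(waiting))
    waiting -= charging
    served_per_slot.append(charging)

print(served_per_slot, "still waiting:", waiting)
```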
Unlike PHEVs, whose future arrivals are hard to predict, another category of elastic load, such as Heating, Ventilation and Air-Conditioning (HVAC) systems, has a future state that can be predicted from the current state and the control actions. How to minimize power generation cost using this kind of elastic load is also an interesting topic for power companies. Existing work usually uses HVAC systems for load following or load shaping based on given control signals or objectives. However, optimal external control signals may not always be available. Without such control signals, how to trade off the fluctuation of non-renewable power generation against the limited demand response potential of the elastic load, while guaranteeing user comfort, is still an open problem.
To solve this problem, we first model the temperature evolution process of a room and propose an approach to estimate the key parameters of the model.
Then, based on model predictive control, a centralized and a distributed algorithm are proposed to minimize the fluctuation and maximize user comfort. In addition, we propose a dynamic water level adjustment algorithm to keep demand response available in both directions. Extensive simulations based on practical data sets show that the proposed algorithms can effectively reduce the load fluctuation.
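A minimal sketch of model predictive control for a single room, assuming a first-order thermal model and using the cvxpy library, is shown below; the model coefficients, comfort band, horizon and objective weights are invented for illustration and are not the parameters identified in the thesis.

```python
import cvxpy as cp
import numpy as np

# First-order room thermal model (illustrative): T[k+1] = a*T[k] + (1-a)*T_out[k] + b*P[k]
a, b = 0.9, -0.4            # b < 0: cooling power lowers the room temperature
H = 12                      # horizon (time slots)
T0 = 26.0                   # initial room temperature (degC)
T_out = 30 + 2 * np.sin(np.linspace(0, np.pi, H))   # outdoor temperature forecast
T_min, T_max = 22.0, 24.0   # comfort band
P_max = 12.0                # kW cooling capacity

P = cp.Variable(H, nonneg=True)
T = cp.Variable(H + 1)

constraints = [T[0] == T0, P <= P_max]
for k in range(H):
    constraints += [T[k + 1] == a * T[k] + (1 - a) * T_out[k] + b * P[k]]
constraints += [T[1:] >= T_min, T[1:] <= T_max]

# Trade off smooth (low-fluctuation) power against total energy use.
objective = cp.Minimize(cp.sum_squares(cp.diff(P)) + 0.1 * cp.sum(P))
cp.Problem(objective, constraints).solve()
print(np.round(P.value, 2))
```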
Both the randomized PHEV charging and the HVAC control algorithms discussed above belong to direct or centralized load shaping, which has been heavily investigated. However, it is usually not clear how users are compensated for providing load shaping services. In the last part of this thesis, we investigate indirect load shaping in a distributed manner. On the one hand, we aim to reduce users' energy cost by investigating how to fully utilize the battery pack and the water tank of Combined Heat and Power (CHP) systems. We first formulate queueing models for the CHP systems and then propose an algorithm based on the Lyapunov optimization technique that does not need any statistical information about the system dynamics. The optimal control actions can be obtained by solving a non-convex optimization problem, and we discuss when it can be converted into a convex optimization problem. On the other hand, based on a model of the users' reactions, we propose an algorithm with a time complexity of O(log n) to determine the RTP with which the power company can effectively coordinate all the CHP systems and provide distributed load shaping services.
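The flavour of the Lyapunov-based control can be shown with a single-storage toy: in each slot the controller greedily minimizes a drift-plus-penalty expression that weighs the electricity cost against the deviation of the storage level from a reference, without any statistics about future prices. The prices, storage model and weight V below are illustrative assumptions, not the thesis's CHP formulation.

```python
import random

random.seed(3)

CAPACITY = 10.0          # kWh of storage (stands in for the CHP battery/water tank)
REF = CAPACITY / 2       # reference level the virtual queue is measured against
V = 4.0                  # weight on cost versus queue stability
ACTIONS = [-1.0, 0.0, 1.0]   # kWh discharged (-) or charged (+) per slot

soc, total_cost = 5.0, 0.0
for t in range(48):
    price = random.choice([0.1, 0.2, 0.4])        # $/kWh, unknown in advance
    queue = soc - REF                             # virtual queue (signed deviation)
    feasible = [a for a in ACTIONS if 0.0 <= soc + a <= CAPACITY]
    # Drift-plus-penalty: queue*a pushes soc back toward REF, V*price*a is the bill.
    a = min(feasible, key=lambda x: queue * x + V * price * x)
    soc += a
    total_cost += price * max(a, 0.0)             # pay only when charging from the grid

print(round(soc, 1), round(total_cost, 2))
```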
|