41

Calibration of IDM Car Following Model with Evolutionary Algorithm

Yang, Zhimin 11 January 2024 (has links)
Car-following (CF) behaviour modelling has made significant progress in both traffic engineering and traffic psychology during recent decades. Autonomous vehicles (AVs) have been shown to optimise traffic flow and increase traffic stability. Consequently, several car-following models have been proposed based on various car-following criteria, leading to a range of model parameter sets. In traffic engineering, the Intelligent Driver Model (IDM) is commonly used as a microscopic traffic flow model to simulate a single vehicle's behaviour on a road. Observational data can be employed to calibrate IDM parameters, which enhances the model's practicality for real-world applications. The calibration of model parameters is therefore crucial in traffic simulation research and typically involves solving an optimization problem. In this context, the Nelder-Mead (NM) algorithm, the particle swarm optimization (PSO) algorithm and the genetic algorithm (GA) are used in this study to parameterize the IDM, using abundant trajectory data from five different road conditions. The study further examines the effects of the various algorithms on the IDM in different road sections, providing useful insights for traffic simulation and optimization.
Table of Contents:
Chapter 1 Introduction: 1.1 Background and Motivation; 1.2 Structure of the Work
Chapter 2 Background and Related Work: 2.1 Car-Following Models (2.1.1 General Motors model and Gazis-Herman-Rothery model; 2.1.2 Optimal velocity model and extended models; 2.1.3 Safety distance or collision avoidance models; 2.1.4 Physiology-psychology models; 2.1.5 Intelligent Driver Model); 2.2 Calibration of Car-Following Models (2.2.1 Statistical Methods; 2.2.2 Optimization Algorithms); 2.3 Trajectory Data (2.3.1 Requirements of Experimental Data; 2.3.2 Data Collection Techniques; 2.3.3 Collected Experimental Data)
Chapter 3 Experiments and Results: 3.1 Calibration Process (3.1.1 Objective Function; 3.1.2 Error Analysis); 3.2 Software and Methodology; 3.3 NM Results; 3.4 PSO Results (3.4.1 PSO Calibrator; 3.4.2 PSO Results); 3.5 GA Results; 3.6 Optimization Performance Analysis
Chapter 4 Conclusion
References
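As a hedged illustration of the calibration setup this abstract describes: the sketch below forward-simulates the standard IDM (parameters v0, T, s0, a, b) against a leader speed profile and fits the parameters by minimizing the gap RMSE with SciPy's Nelder-Mead implementation. The trajectory arrays are made-up placeholders, not the thesis data; only the general idea of trajectory-based calibration is shown.

```python
import numpy as np
from scipy.optimize import minimize

def idm_acceleration(v, dv, s, v0, T, s0, a, b, delta=4.0):
    """Standard IDM acceleration. v: follower speed, dv: approach rate
    (follower minus leader speed), s: bumper-to-bumper gap."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def simulate_gaps(params, leader_v, s_init, v_init, dt=0.1):
    """Forward-simulate the follower against a leader speed profile."""
    v0, T, s0, a, b = params
    s, v, gaps = s_init, v_init, []
    for lv in leader_v:
        acc = idm_acceleration(v, v - lv, max(s, 0.1), v0, T, s0, a, b)
        v = max(v + acc * dt, 0.0)          # no driving backwards
        s = s + (lv - v) * dt               # gap shrinks when follower is faster
        gaps.append(s)
    return np.array(gaps)

def rmse_gap(params, leader_v, observed_gaps, v_init):
    if np.any(np.asarray(params) <= 0.0):   # reject unphysical parameters
        return 1e9
    sim = simulate_gaps(params, leader_v, observed_gaps[0], v_init)
    return float(np.sqrt(np.mean((sim - observed_gaps) ** 2)))

# Hypothetical trajectory: leader cruising at 10 m/s, follower starting 20 m behind.
leader_v = np.full(3000, 10.0)
observed_gaps = np.full(3000, 20.0)

result = minimize(rmse_gap, x0=[30.0, 1.5, 2.0, 1.0, 1.5],
                  args=(leader_v, observed_gaps, 10.0), method="Nelder-Mead")
print(result.x)  # calibrated (v0, T, s0, a, b)
```

The same loop swaps in PSO or a GA by replacing the `minimize` call with the corresponding optimizer over the same objective.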
42

Back analysis of rock mass properties in the regional fault zone under Lake Mälaren

Liu, Jiaqi January 2022 (has links)
The properties of the surrounding rock mass in underground projects have significant impacts on design and construction. However, it is challenging to evaluate rock mass properties due to the great uncertainty of the geological conditions. Moreover, even though field testing techniques are well developed, the high cost of tests and the scatter in their results make it difficult to cover a large domain in a complex project. In recent years, owing to the maturity of numerical analysis and the wide use of tunnel deformation measurements, displacement-based back analysis has become a popular and effective indirect method for estimating rock mass properties. The main purpose of this thesis was to perform a displacement-based back analysis of the in-situ stress ratio and Young's modulus for the exploratory tunnel BP201, which constitutes the passage under Lake Mälaren in the Stockholm Bypass project. The back analysis was carried out using the Pattern search method and the Simplex method. The error function was built according to the least squares method, and the commercial finite element software Plaxis 2D was used to calculate theoretical deformations. Moreover, a sensitivity analysis was performed to study the influence of the starting point and how other numerical model parameters affect the results of the back analysis. The two optimization algorithms used in this study provided an in-situ stress ratio and Young's modulus whose predicted deformations closely matched the measured ones. For the specific problem analysed in this thesis, the Simplex method was found to be more suitable than the Pattern search method. It was also concluded that a better choice of starting point can improve the precision and efficiency of the back analysis.
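A minimal sketch of the displacement-based back analysis loop described above: a least-squares error between measured and computed displacements is minimized over (k0, E) with the simplex (Nelder-Mead) method. Since the forward computation in the thesis is a Plaxis 2D finite element run, the forward model below is a purely hypothetical surrogate, used only to show the structure of the calibration loop.

```python
import numpy as np
from scipy.optimize import minimize

measured = np.array([4.2, 3.1, 2.5])   # hypothetical convergence measurements (mm)

def forward_model(k0, E_rock):
    """Placeholder for the FE computation (Plaxis 2D in the thesis).
    Returns theoretical displacements at the measurement points."""
    # Illustrative surrogate: displacements scale with the stress ratio
    # and inversely with Young's modulus.
    coeffs = np.array([60.0, 45.0, 36.0])   # fictitious influence factors
    return coeffs * k0 / E_rock

def least_squares_error(params):
    k0, E_rock = params
    if k0 <= 0.0 or E_rock <= 0.0:          # keep parameters physical
        return 1e9
    residual = forward_model(k0, E_rock) - measured
    return float(residual @ residual)

start = [1.5, 20.0]   # starting point (k0, E in GPa); its choice matters, as the thesis notes
best = minimize(least_squares_error, start, method="Nelder-Mead")
print(best.x)         # back-analysed (k0, E)
```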
43

Navigating Uncertainty: Distributed and Bandit Solutions for Equilibrium Learning in Multiplayer Games

Yuanhanqing Huang (18361527) 15 April 2024 (has links)
<p dir="ltr">In multiplayer games, a collection of self-interested players aims to optimize their individual cost functions in a non-cooperative manner. The cost function of each player depends not only on its own actions but also on the actions of others. In addition, players' actions may also collectively satisfy some global constraints. The study of this problem has grown immensely in the past decades with applications arising in a wide range of societal systems, including strategic behaviors in power markets, traffic assignment of strategic risk-averse users, engagement of multiple humanitarian organizations in disaster relief, etc. Furthermore, with machine learning models playing an increasingly important role in practical applications, the robustness of these models becomes another prominent concern. Investigation into the solutions of multiplayer games and Nash equilibrium problems (NEPs) can advance the algorithm design for fitting these models in the presence of adversarial noises. </p><p dir="ltr">Most of the existing methods for solving multiplayer games assume the presence of a central coordinator, which, unfortunately, is not practical in many scenarios. Moreover, in addition to couplings in the objectives and the global constraints, all too often, the objective functions contain uncertainty in the form of stochastic noises and unknown model parameters. The problem is further complicated by the following considerations: the individual objectives of players may be unavailable or too complex to model; players may exhibit reluctance to disclose their actions; players may experience random delays when receiving feedback regarding their actions. To contend with these issues and uncertainties, in the first half of the thesis, we develop several algorithms based on the theory of operator splitting and stochastic approximation, where the game participants only share their local information and decisions with their trusted neighbors on the network. In the second half of the thesis, we explore the bandit online learning framework as a solution to the challenges, where decisions made by players are updated based solely on the realized objective function values. Our future work will delve into data-driven approaches for learning in multiplayer games and we will explore functional representations of players' decisions, in a departure from the vector form. </p>
44

Detailing Electrodynamics and Temperature for MRI in the Short Wavelength Regime / Numerical Simulations, Experimental Validation and Early Applications

Oberacker, Eva Irene 24 June 2024 (has links)
To better define the role of temperature in biological systems and disease, and to ensure safety, facilitate diagnosis and safely guide therapy, we require approaches to study and manipulate temperature and characterize its effects. A better understanding of radiofrequency (RF) induced heating of tissue is therefore required. Magnetic resonance (MR) is an indispensable diagnostic imaging tool. A critical distinction of ultrahigh field MR (UHF-MR) is the use of higher RF frequencies and thus shorter wavelengths. The altered RF wave propagation in the human body poses challenges for MR safety considerations. The smaller scale on which RF heating of electrically conductive passive implants occurs during UHF MRI is not covered by current guidelines. I proposed a novel approach tailored to examining implant safety in UHF MRI. This approach was evaluated in phantoms, validated in test objects and applied to assess the MR safety of small ocular tantalum markers (OTMs). My results show that OTMs can be considered safe for MRI at magnetic field strengths ≤ 7.0 T. The short wavelength regime supports precisely localized temperature manipulation, such as hyperthermia anticancer treatment. Combining the strengths of UHF MRI in diagnostic imaging and MR thermometry (MRTh) with dedicated hardware and hyperthermia treatment planning algorithms affords a unique theranostic approach in a single integrated device (RF applicator). The goal is the treatment of brain tumors, for which, to the best of our knowledge, no focused RF hyperthermia treatment is currently available. The main accomplishment of this work is establishing the entire workflow from basic EMF simulations of an RF applicator with patient-specific simulation models, through (multifrequency) treatment planning and performance assessment, to treatment monitoring via MRTh. For translation of the insights obtained during this PhD thesis project, more research is warranted addressing remaining engineering challenges in the RF applicator design.
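For orientation, the RF heating assessed in work of this kind is conventionally quantified via the specific absorption rate (SAR) and propagated to temperature with the Pennes bioheat equation. The following are the standard textbook forms (with tissue conductivity σ, peak electric field E, tissue density ρ and specific heat c, blood perfusion ω and metabolic heat Q_m), not equations quoted from the thesis:

```latex
\mathrm{SAR} = \frac{\sigma\,\lvert \vec{E} \rvert^{2}}{2\rho},
\qquad
\rho c\,\frac{\partial T}{\partial t}
= \nabla \cdot \bigl(k \nabla T\bigr)
- \rho_{b} c_{b}\,\omega\,(T - T_{b})
+ \rho\,\mathrm{SAR} + Q_{m}
```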
45

Entwicklung und Validierung eines Fragebogens zur Erfassung der kognitiven Dimension gesundheitsbezogener Lebensqualität (COQOL - COgnitive Quality Of Life) bei Menschen mit Demenz / Development and validation of a self-report instrument for measuring the cognitive dimension of Health-Related Quality of Life - the COQOL (COgnitive Quality Of Life) in patients with dementia

Werkmeister, Martin Lenard 19 May 2019 (has links)
No description available.
46

Realisierung einer Schedulingumgebung für gemischt-parallele Anwendungen und Optimierung von layer-basierten Schedulingalgorithmen / Development of a scheduling support environment for mixed parallel applications and optimization of layer-based scheduling algorithms

Kunis, Raphael 25 January 2011 (has links) (PDF)
One challenge of parallel processing is achieving scalability of large parallel applications across different parallel systems. The central problem is that an application may execute very well on one parallel system, while porting it to another system generally leads to poor results. Using the programming model of parallel tasks with dependencies, the scalability of many parallel algorithms can be improved considerably. Programming with parallel tasks yields task graphs with dependencies that represent a parallel application, also called a mixed parallel application. The basis for efficient execution of a mixed parallel application is a suitable schedule, which specifies an efficient mapping of the parallel tasks onto the processors of the parallel system. Scheduling algorithms are employed to compute such a schedule. A central difficulty in determining a schedule for mixed parallel applications is that scheduling is already NP-hard for single-processor tasks with dependencies on a parallel system with two processors. Therefore, only approximation algorithms and heuristics exist for computing a schedule. One approach is layer-based scheduling algorithms: these first form layers of independent parallel tasks and then compute a schedule for each layer separately. A weakness of these scheduling algorithms is the assembly of the individual layer schedules into the global schedule. The Move-blocks algorithm presented here offers an elegant way to improve this assembly by merging the schedules of consecutive layers. Although a multitude of scheduling algorithms for mixed parallel applications exists, there has so far been no comprehensive support for scheduling by programming tools. In particular, there is no scheduling environment that unites a large number of scheduling algorithms. The presentation of the flexible, component-based and extensible scheduling environment SEParAT is the second focus of this dissertation. SEParAT supports various usage scenarios that go far beyond pure scheduling, e.g., the comparison of scheduling algorithms and the extension and implementation of new scheduling algorithms. In addition to these usage scenarios, both the internal processing of a scheduling pass and the component-based software architecture are presented in detail.
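To make the layer-forming step concrete, here is a minimal sketch (an assumption of the general scheme, not SEParAT code): a task graph with dependencies is partitioned into layers of mutually independent parallel tasks by topological levels. Scheduling each layer and the Move-blocks merging of consecutive layer schedules are not shown.

```python
from collections import defaultdict, deque

def build_layers(tasks, deps):
    """Group a task DAG into layers of mutually independent tasks.
    `deps` maps each task to the set of tasks it depends on."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succ = defaultdict(list)
    for t, ds in deps.items():
        for d in ds:
            succ[d].append(t)
    frontier = deque(t for t in tasks if indeg[t] == 0)
    layers = []
    while frontier:
        layer = list(frontier)      # all tasks in a layer are independent
        layers.append(layer)
        frontier = deque()
        for t in layer:
            for s in succ[t]:
                indeg[s] -= 1       # release successors whose deps are done
                if indeg[s] == 0:
                    frontier.append(s)
    return layers

# Hypothetical mixed-parallel task graph
tasks = ["A", "B", "C", "D", "E"]
deps = {"C": {"A", "B"}, "D": {"B"}, "E": {"C", "D"}}
print(build_layers(tasks, deps))   # [['A', 'B'], ['C', 'D'], ['E']]
```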
47

Optimization of Section Points Locations in Electric Power Distribution Systems : Development of a Method for Improving the Reliability / Optimal placering av sektioneringspunkter : Utveckling av metod för att förbättra tillförlitligheten

Johansson, Joakim January 2015 (has links)
The power distribution system is the final link that transfers electrical energy to the individual customers. It is a complex technical grid, and it is associated with the majority of all outages that occur. Improving its reliability is an efficient way to reduce the effects of outages. A common way of improving reliability is to design loop structures containing two connected feeders separated by a section point. The location of the section point decides how the system structure is connected and its level of reliability. By finding the optimal location, an improved reliability may be accomplished. This Master's thesis has developed a method for finding optimized section point locations in a primary distribution system in order to improve its reliability. A case study was conducted in a part of Mälarenergi Elnät's distribution system, with the objective of developing an algorithm in MATLAB able to generate the optimal section points in the area. An analytical technique, together with a Failure Modes and Effects Analysis (FMEA) as a preparatory step, was used to simulate the impact of outages in various components based on historical data and literature reviews. The impact was quantified by calculating the System Average Interruption Duration Index (SAIDI) and the Expected Cost (ECOST), which represent the reliability from a customer and a socio-economic perspective, respectively. An optimization routine based on a greedy algorithm, sketched below, made it possible to improve the reliability. The result of the case study showed a possible improvement of 28% in SAIDI and 41% in ECOST when optimizing the location of section points. It also indicated that loop structures containing mostly industry, trade and service sectors may improve ECOST considerably by relocating the section point. The analysis concluded that, given the considerable improvement shown in the case study, a distribution system could benefit greatly from optimizing the location of section points. The developed algorithm provides a helpful and cost-effective tool for such a process. Applying it to a full-size system was considered possible, but would first require improved reliability input data and the resolution of some fundamental issues, such as rated line currents and geographical distances to substations.
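A minimal sketch of the greedy selection loop over candidate section point locations; the network data and the SAIDI evaluator are stand-ins (assumptions), not Mälarenergi Elnät's model or the thesis' FMEA-based evaluation.

```python
def greedy_section_points(loops, evaluate_saidi):
    """Pick, loop by loop, the section point location minimizing SAIDI.

    loops: dict mapping loop id -> list of candidate section point locations
    evaluate_saidi: callable(config) -> system SAIDI for that configuration
    """
    chosen = {}
    for loop_id, candidates in loops.items():
        best_loc, best_saidi = None, float("inf")
        for loc in candidates:
            trial = dict(chosen, **{loop_id: loc})   # tentatively place the point
            saidi = evaluate_saidi(trial)
            if saidi < best_saidi:
                best_loc, best_saidi = loc, saidi
        chosen[loop_id] = best_loc                    # greedy: fix and move on
    return chosen

def toy_saidi(config):
    # Toy stand-in: SAIDI improves the closer each section point is to position 3.
    return sum(abs(p - 3) for p in config.values())

loops = {"L1": [1, 2, 3, 4], "L2": [1, 2, 3]}
print(greedy_section_points(loops, toy_saidi))   # {'L1': 3, 'L2': 3}
```

The greedy choice is never revisited, which keeps the search cheap at the cost of global optimality; the same skeleton works with ECOST as the objective.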
48

Localização colaborativa em robótica de enxame. / Collaborative localization in swarm robotics.

Alan Oliveira de Sá 26 May 2015 (has links)
Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro / Many applications of Swarm Robotic Systems (SRSs) require that a robot is able to discover its position. The location information of the robots is required, for example, to allow them to be correctly positioned within a predefined swarm formation. Similarly, when the robots act as mobile sensors, the position information is needed to allow the identification of the location of the measured events. Due to the size, cost and energy source restrictions of these devices, or even limitations imposed by the operating environment, the straightforward solution, i.e. the use of a Global Positioning System (GPS), is often not feasible. The method proposed in this work allows the estimation of the absolute positions of a set of unknown nodes, based on the coordinates of a set of reference nodes and the distances measured between nodes. The solution is achieved by means of a distributed processing strategy, where each unknown node estimates its own position and helps its neighbors to compute their respective coordinates. The solution makes use of a new method called Multi-hop Collaborative Min-Max Localization (MCMM), herein proposed, aiming to improve the quality of the initial positions estimated by the unknown nodes in case of failure during the recognition of the reference nodes. The position refinement is achieved with the Backtracking Search Optimization Algorithm (BSA) and Particle Swarm Optimization (PSO), whose performances are compared. To compose the objective function, a new method to compute the confidence factor of the network nodes is introduced, the Min-Max Area Confidence Factor (MMA-CF), which is compared with the existing Hops to Anchor Confidence Factor (HTA-CF). Based on the proposed localization method, four algorithms were developed and further evaluated through a set of simulations in MATLAB and experiments in swarms of Kilobot robots. The performance of the algorithms is evaluated on problems with different topologies, numbers of nodes and proportions of reference nodes. It is also compared with the performance of other localization algorithms, showing improvements of 40% to 51%. The simulation and experiment outcomes demonstrate the effectiveness of the proposed method.
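For context on the min-max family that MCMM extends, the classic single-hop min-max (bounding box) estimate can be sketched as follows: each reference node and its measured distance define a square, and the unknown node takes the center of the intersection of all squares. The multi-hop collaborative stage and the BSA/PSO refinement are omitted; the anchors and ranges below are hypothetical.

```python
def min_max_position(anchors, distances):
    """Classic min-max localization: intersect per-anchor bounding boxes.

    anchors: list of (x, y) reference-node coordinates
    distances: measured distance from the unknown node to each anchor
    """
    x_lo = max(x - d for (x, _), d in zip(anchors, distances))
    x_hi = min(x + d for (x, _), d in zip(anchors, distances))
    y_lo = max(y - d for (_, y), d in zip(anchors, distances))
    y_hi = min(y + d for (_, y), d in zip(anchors, distances))
    # Estimate = center of the intersection box.
    return ((x_lo + x_hi) / 2.0, (y_lo + y_hi) / 2.0)

# Hypothetical anchors and (noisy) ranges to a node near (2, 2)
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
distances = [2.9, 2.8, 2.9]
print(min_max_position(anchors, distances))   # roughly (2.05, 1.95)
```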
50

Maximização da penetração da geração distribuída através do algoritmo de otimização nuvem de partículas / Maximizing distributed generation penetration with the particle swarm optimization algorithm

Pires, Bezaliel Albuquerque da Silva 03 August 2011 (has links)
This work develops a methodology for defining the maximum active power that can be injected into predefined nodes of the studied distribution networks, considering the possibility of multiple accesses of generating units. These maximum values are obtained from an optimization study in which the resulting losses must not exceed those of the base case, i.e., the case without distributed generation, while the constraints on branch loading and system voltages are respected. To address the problem, an algorithm based on the particle swarm optimization (PSO) method is proposed and applied to the conventional AC load flow and to an optimal load flow that maximizes the penetration of distributed generation. As an alternative, the Newton-Raphson method was incorporated for solving the load flow. The computer program is implemented in SCILAB. The proposed algorithm is tested with data from the 14-node IEEE network and from a 25-node high-voltage (69 kV) distribution network in the state of Rio Grande do Norte. The algorithm defines allowed values of nominal active power of distributed generation, in percentage terms relative to the demand of the network, based on reference values.
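As a hedged sketch of the PSO loop described above: particles encode DG injection vectors, and a penalty rejects configurations whose losses exceed the base case. The load flow here is a toy placeholder for the SCILAB AC/Newton-Raphson load flow used in the work, and all figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bus, base_losses, p_max = 3, 0.05, 2.0     # hypothetical figures (p.u.)

def run_load_flow(p_dg):
    """Placeholder for the AC load flow (SCILAB / Newton-Raphson in the work).
    Returns total losses for a DG injection vector; purely illustrative."""
    return base_losses + 0.01 * float(p_dg @ p_dg) - 0.008 * p_dg.sum()

def fitness(p_dg):
    losses = run_load_flow(p_dg)
    penalty = 1e3 * max(0.0, losses - base_losses)   # losses must not exceed base case
    return -p_dg.sum() + penalty                     # minimize => maximize injected power

n_part = 20
x = rng.uniform(0.0, p_max, (n_part, n_bus))         # particle positions (injections)
v = np.zeros_like(x)                                 # particle velocities
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, p_max)                   # respect injection bounds
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest, run_load_flow(gbest))   # best injections and resulting losses
```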
