  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Uma meta-heurística para uma classe de problemas de otimização de carteiras de investimentos / A metaheuristic for a class of investment portfolio optimisation problems

Silva, Yuri Laio Teixeira Veras 16 February 2017 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq / The portfolio selection problem (PSP) consists of allocating resources among a finite number of assets, aiming, in its classic formulation, to resolve a trade-off between the risk and the expected return of the portfolio. This problem is one of the most important topics in present-day finance and economics. Since the pioneering work of Markowitz, it has been treated as an optimisation problem with these two objectives. In recent years, however, additional constraints and risk measures have been considered in the literature, such as cardinality constraints, minimum transaction lots and asset pre-selection, with the aim of bringing the problem closer to the reality of financial markets. In this context, this work proposes a metaheuristic, Adaptive Non-dominated Sorting Multiobjective Particle Swarm Optimization, for the optimisation of several variants of the PSP, allowing the problem to be solved under a set of constraints chosen by the investor.
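The PSO mechanics behind this kind of portfolio optimiser can be sketched with a plain global-best swarm on a scalarised mean-variance objective. Everything below is invented for illustration (the four assets, the covariance matrix, the `risk_aversion` weight and the swarm constants); the thesis's algorithm is a multi-objective variant with investor-chosen constraints, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: expected returns and (diagonal) covariance for 4 hypothetical assets.
mu = np.array([0.08, 0.12, 0.10, 0.07])
cov = np.diag([0.04, 0.09, 0.06, 0.03])
risk_aversion = 3.0  # illustrative trade-off weight, not from the thesis

def repair(w):
    """Project a raw particle onto the feasible set: w >= 0 and sum(w) == 1."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def cost(w):
    """Scalarised mean-variance objective: weighted risk minus expected return."""
    return risk_aversion * (w @ cov @ w) - mu @ w

n, iters, dim = 30, 200, len(mu)
x = rng.random((n, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([cost(repair(p)) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    # Standard gbest velocity update: inertia + cognitive + social pull.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(repair(p)) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

weights = repair(gbest)
print(np.round(weights, 3))
```

The repair step is one simple way to keep particles on the budget simplex; constraint handling (cardinality, minimum lots) would replace or extend `repair` in a fuller implementation.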
42

Algoritmo híbrido para avaliação da integridade estrutural: uma abordagem heurística / Hybrid algorithm for damage detection: a heuristic approach

Oscar Javier Begambre Carrillo 25 June 2007 (has links)
In this study, a new auto-configured hybrid PSOS (Particle Swarm Optimization - Simplex) algorithm for assessing structural integrity from dynamic responses is presented. The objective function for the minimisation problem is formulated from frequency response functions (FRFs) and/or modal data of the system. A novel strategy for controlling the parameters of the Particle Swarm Optimization (PSO) algorithm, based on the Nelder-Mead (simplex) method, is developed; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and accuracy are enhanced. On the various benchmark functions analysed, the proposed hybrid method outperformed simulated annealing (SA), genetic algorithms (GA) and the basic PSO. Several damage identification problems are presented, taking into account the effects of noise and incomplete experimental data. In all cases, the damage location and extent were determined successfully. Finally, the PSOS was used to identify the parameters of a non-linear oscillator (the Duffing oscillator) with good results.
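The two-phase flavour of pairing PSO with the simplex method can be sketched as a coarse global PSO followed by a Nelder-Mead refinement. A caveat: in the thesis's PSOS the simplex method controls the PSO parameters themselves, whereas the loose "PSO, then simplex" pipeline below, run on the standard Rosenbrock test function, only shows why the pairing is attractive; all constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def rosenbrock(p):
    """Classic banana-shaped test function with its minimum of 0 at (1, 1)."""
    return (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2

# Phase 1 -- coarse global search with a plain gbest PSO.
n, iters = 20, 80
swarm = rng.uniform(-2.0, 2.0, (n, 2))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pbest_f = np.array([rosenbrock(p) for p in swarm])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = swarm + vel
    f = np.array([rosenbrock(p) for p in swarm])
    better = f < pbest_f
    pbest[better], pbest_f[better] = swarm[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# Phase 2 -- local refinement with the simplex (Nelder-Mead) method.
res = minimize(rosenbrock, gbest, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
print(np.round(res.x, 4))
```

The PSO supplies a good starting point in the curved valley; the derivative-free simplex search then polishes it to high precision, which is exactly the division of labour that motivates hybrids of this kind.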
43

A novel q-exponential based stress-strength reliability model and applications to fatigue life with extreme values

SALES FILHO, Romero Luiz Mendonça 24 February 2016 (has links)
CAPES / In recent years, a family of probability distributions based on nonextensive statistical mechanics, known as q-distributions, has seen a surge of applications in several fields of science and engineering. In this work the q-Exponential distribution is studied in detail. One feature of this distribution is its ability to model data with power-law behaviour, since its probability density function (PDF) is heavy-tailed for particular values of its parameters. This feature makes the distribution a candidate for modelling data sets with extremely large values (e.g. cycles to failure). Since analytical expressions for the maximum likelihood estimates (MLE) of the q-Exponential parameters are very difficult to obtain, in this work the MLE are computed using two different optimisation methods, particle swarm optimization (PSO) and Nelder-Mead (NM), coupled with parametric and non-parametric bootstrap methods to obtain confidence intervals for these parameters; asymptotic intervals are also derived. Besides, inference is made about a useful performance metric in system reliability, the so-called stress-strength index R = P(Y < X), where the stress Y and the strength X are independent q-Exponential random variables with different parameters. When dealing with practical stress-strength reliability problems, one can work with fatigue life data and make use of the well-known relation between stress and cycles until failure. For some materials, this kind of data can involve extremely large values, and the capability of the q-Exponential distribution to model such data makes it a good candidate for stress-strength models. Since the index R is considered a topic of great interest in system reliability, the maximum likelihood estimator of R is developed and shown to be a function of the parameters of the distributions of X and Y. The behaviour of this estimator is assessed by means of simulated experiments, and confidence intervals are developed based on parametric and non-parametric bootstrap. As an application, two experimental data sets from the literature are considered: the first analyses the high-cycle fatigue properties of ductile cast iron for wind turbine components, and the second evaluates specimen size effects on the gigacycle fatigue properties of high-strength steel.
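The stress-strength index R = P(Y < X) discussed above can be illustrated by simulation. The q-Exponential parameterization assumed below, F(x) = 1 - [1 - (1 - q)λx]^((2 - q)/(1 - q)) with 1 < q < 2, is one common convention and may differ from the thesis's; the sampler and parameter values are illustrative. With identical parameters for stress and strength, symmetry forces R = 1/2, which gives a handy sanity check.

```python
import numpy as np

def q_exp_sample(q, lam, size, rng):
    """Inverse-transform sampling from the q-Exponential CDF assumed above."""
    u = rng.random(size)
    return (1.0 - (1.0 - u) ** ((1.0 - q) / (2.0 - q))) / ((1.0 - q) * lam)

rng = np.random.default_rng(42)
n = 200_000
# q = 1.5 gives a heavy right tail; equal parameters make X and Y exchangeable.
strength = q_exp_sample(q=1.5, lam=1.0, size=n, rng=rng)  # X
stress = q_exp_sample(q=1.5, lam=1.0, size=n, rng=rng)    # Y
r_hat = np.mean(stress < strength)  # Monte Carlo estimate of R = P(Y < X)
print(round(r_hat, 3))
```

The thesis goes further and derives the MLE of R from the fitted parameters of X and Y; the Monte Carlo estimate here only illustrates what the index measures.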
44

Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation

Helbig, Marde 24 September 2012 (has links)
Most optimisation problems in everyday life are not static in nature, have multiple objectives, and at least two of the objectives are in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs), and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOOPs efficiently, functions that resemble real-world DMOOPs, called benchmark functions, are required, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is the lack of standard benchmark functions and performance measures. To address this problem, an overview of the current literature is provided, and shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. The optimisation process of DVEPSO is driven by guides; therefore, various guide-update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on its performance. The investigated parameters include approaches to manage boundary-constraint violations, approaches to share knowledge between the sub-swarms, and responses to changes in the environment that are applied either to the particles of the sub-swarms or to the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms.
/ Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
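The core vector-evaluated idea behind DVEPSO — one sub-swarm per objective, each guided by the best particle of the other sub-swarm — can be sketched on the static two-objective Schaffer problem (f1 = x², f2 = (x - 2)²), whose Pareto-optimal set is the interval [0, 2]. The swarm constants are illustrative, and this sketch omits DVEPSO's archive, change detection and dynamic responses.

```python
import numpy as np

rng = np.random.default_rng(7)

objectives = [lambda x: x ** 2, lambda x: (x - 2.0) ** 2]
n, iters = 20, 100

# One sub-swarm (row) per objective; particles are scalars here.
x = rng.uniform(-5.0, 5.0, (2, n))
v = np.zeros_like(x)
pb = x.copy()
pbf = np.array([[objectives[s](p) for p in x[s]] for s in range(2)])
gb = np.array([pb[s][pbf[s].argmin()] for s in range(2)])

for _ in range(iters):
    for s in range(2):
        guide = gb[1 - s]  # knowledge sharing: the OTHER swarm's best guides this one
        r1, r2 = rng.random((2, n))
        v[s] = 0.7 * v[s] + 1.5 * r1 * (pb[s] - x[s]) + 1.5 * r2 * (guide - x[s])
        x[s] += v[s]
        f = np.array([objectives[s](p) for p in x[s]])
        better = f < pbf[s]
        pb[s][better], pbf[s][better] = x[s][better], f[better]
        gb[s] = pb[s][pbf[s].argmin()]

print(gb)
```

Because each sub-swarm is pulled both toward its own objective's minimum and toward the other objective's best, the bests tend to settle in the trade-off region between the two single-objective optima.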
45

Parameter estimation in a cardiovascular computational model using numerical optimization : Patient simulation, searching for a digital twin

Tuccio, Giulia January 2022 (has links)
Developing models of the cardiovascular system that simulate the dynamic behaviour of a virtual patient's condition is fundamental in the medical domain for outcome prediction and hypothesis generation. These models are usually described through ordinary differential equations (ODEs). To obtain a patient-specific representative model, it is crucial to have an accurate and rapid estimate of the hemodynamic model parameters. Moreover, when adequate model parameters are found, the resulting time series of state variables can be used clinically to predict the response to treatments and for non-invasive monitoring. In this thesis, we address parameter estimation, or inverse modeling, by solving an optimization problem that minimizes the error between the model output and the target data. In our case, the target data are a set of user-defined state variables, descriptive of a specific hospitalized patient and obtained from time-averaged state variables. The thesis compares both state-of-the-art and novel methods for estimating the underlying model parameters of a cardiovascular simulator, Aplysia. All the proposed algorithms are selected and implemented considering the constraints deriving from the interaction with Aplysia. In particular, given the inaccessibility of the ODEs, we selected gradient-free methods, which do not need to estimate derivatives numerically. Furthermore, we aim at a small number of iterations and objective-function calls, since these strongly impact the speed of the estimation procedure, and thus the applicability at the bedside of the knowledge gained through the parameters. The thesis also addresses the most common problems encountered in inverse modeling, among which are the non-convexity of the objective function and the identifiability problem. To assist in resolving the latter, an identifiability analysis is proposed, after which the unidentifiable parameters are excluded. The selected methods are validated using heart failure data representative of different pathologies commonly encountered in Intensive Care Unit (ICU) patients. The results show that the gradient-free global algorithms Enhanced Scatter Search and Particle Swarm estimate the parameters accurately at the price of a high number of function evaluations and a long CPU time; as such, they are not suitable for bedside applications. The local algorithms are likewise unsuitable for finding an accurate solution, given their dependency on the initial guess. To solve this problem, we propose two methods: a hybrid algorithm and a prior-knowledge algorithm. By including prior domain knowledge, these methods can find a good solution, escaping the basins of attraction of local minima and producing clinically significant parameters in a few minutes.
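The inverse-modeling setup described above — a black-box forward model whose output is matched to target data by a gradient-free search — can be sketched in miniature. The two-parameter exponential pressure decay below is a stand-in for the Aplysia simulator (which is not publicly scriptable here), and the "patient" parameters, starting guess and tolerances are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 4.0, 40)  # time grid, seconds (illustrative)

def model(params):
    """Black-box forward model: a simple pressure decay P(t) = p0 * exp(-t/tau)."""
    p0, tau = params
    return p0 * np.exp(-t / max(tau, 1e-9))  # clamp tau so the search can't divide by zero

true_params = np.array([120.0, 1.8])  # hypothetical "patient" to recover
target = model(true_params)

def misfit(params):
    """Objective: mean squared error between model output and target data."""
    return float(np.mean((model(params) - target) ** 2))

# Gradient-free local search, as the model's derivatives are unavailable.
res = minimize(misfit, x0=[80.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 2000})
print(np.round(res.x, 3))
```

In the thesis the objective is non-convex and some parameters are unidentifiable, which is why the identifiability analysis and prior-knowledge variants matter; this toy problem is deliberately well-posed.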
46

Statische und dynamische Hysteresemodelle für die Auslegung und Simulation von elektromagnetischen Aktoren / Static and dynamic hysteresis models for the design and simulation of electromagnetic actuators

Shmachkov, Mikhail, Neumann, Holger, Rottenbach, Torsten, Worlitz, Frank 13 December 2023 (has links)
The reliable determination of expected losses is important in the design process of electromagnetic actuators. While resistive losses can be determined very easily, iron/hysteresis losses often represent an uncertainty factor: manufacturers' specifications are usually available only for a few operating points with harmonic excitation. A detailed description of the ferromagnetic hysteresis is necessary for use in numerical calculations during the design and simulation of such actuators. For this purpose, the Jiles-Atherton hysteresis model and its further developments are often used. Because many modified variants are available, this contribution first examines which model versions are compatible with each other. This allows static and dynamic hysteresis models, as well as the corresponding inverse model forms, to be used with consistent parameterization. Furthermore, parameter identification from experimentally determined hysteresis curves for different materials is presented, using particle swarm optimization.
47

Task Scheduling Using Discrete Particle Swarm Optimisation / Schemaläggning genom diskret Particle Swarm Optimisation

Karlberg, Hampus January 2020 (has links)
Optimising task allocation in networked systems helps utilise available resources. When working with unstable and heterogeneous networks, task scheduling can be used to optimise task completion time, energy efficiency and system reliability. The dynamic nature of networks also means that the optimal schedule is subject to change over time, and the heterogeneity and variability of network designs complicate the translation of set-ups from one network to another. Discrete Particle Swarm Optimisation (DPSO) is a metaheuristic that can be used to find solutions to task scheduling problems. This thesis explores how DPSO can be used to optimise job scheduling in an unstable network. The purpose is to find solutions for networks like the ones used on trains, in order to facilitate trajectory-planning calculations. Through the use of an artificial neural network, we estimate job scheduling costs. These costs are then used by our DPSO metaheuristic to explore a solution space of potential schedules. The results focus on the optimisation of batch sizes in relation to network reliability and latency. We simulate a series of unstable and heterogeneous networks and compare completion times. The baseline for comparison is scheduling that distributes jobs evenly at fixed sizes. The performance of the different approaches is then analysed with regard to usability in real-life scenarios on vehicles. Our results show a noticeable increase in performance within a wide range of network set-ups, at the cost of long search times for the DPSO algorithm. We conclude that, under the right circumstances, the method can be used to significantly speed up distributed calculations, at the cost of requiring significant ahead-of-time computation. We recommend future exploration of DPSO starting states to speed up convergence, as well as benchmarks of real-life performance.
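One common way to point a PSO at a discrete assignment problem like task scheduling is "random key" decoding: particles stay continuous and are rounded to node indices only when evaluated. The thesis's DPSO operates directly on discrete positions, so the sketch below is a neighbouring technique, not its method; the task durations, node count and swarm constants are invented, and the objective is simply the makespan.

```python
import numpy as np

rng = np.random.default_rng(3)
durations = np.array([5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 6.0, 1.0])  # one entry per task
n_nodes = 3

def decode(pos):
    """Round a continuous position to a task-to-node assignment vector."""
    return np.clip(pos, 0, n_nodes - 1e-9).astype(int)

def makespan(pos):
    """Completion time of the slowest node under the decoded assignment."""
    assign = decode(pos)
    return max(durations[assign == k].sum() for k in range(n_nodes))

n, iters, dim = 30, 150, len(durations)
x = rng.uniform(0, n_nodes, (n, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.6 * v + 1.6 * r1 * (pbest - x) + 1.6 * r2 * (gbest - x)
    x = x + v
    f = np.array([makespan(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(decode(gbest), makespan(gbest))
```

For these 36 time units of work on 3 nodes, the makespan can never drop below 12; the search typically lands at or near that bound. In the thesis the hand-coded `makespan` would be replaced by the learned (ANN-based) cost estimate.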
48

Wind Turbine Airfoil Optimization by Particle Swarm Method

Endo, Makoto January 2011 (has links)
No description available.
49

Equivalent Models for Hydropower Operation in Sweden

Prianto, Pandu Nugroho January 2021 (has links)
Hydropower systems often contain complex river systems, which make simulations and analyses of hydropower operation computationally heavy. The full representation of such a river system is referred to as the Detailed model. By creating a simpler model, denoted the Equivalent model, the computational issue can be circumvented; the purpose of the Equivalent model is to emulate the results of the Detailed model. This thesis computes the Equivalent model for a large hydropower system using the Particle Swarm Optimisation algorithm, then evaluates the Equivalent model's performance. Simulations are performed on ten rivers in Sweden, representing four trading areas, for one year, October 2017 - September 2018. Furthermore, the year is divided into quarterly and seasonal periods to investigate whether the Equivalent model changes over time. Performance is evaluated in terms of the relative power difference and the computational time compared to the Detailed model. The relative power difference is 4-23% between the Equivalent and Detailed models, depending on the period and trading area, while the computational time is reduced by more than 90%. Furthermore, the Equivalent model changes over time, suggesting that when the year is divided appropriately, the Equivalent model could perform better; the relative power difference results indicate that performance can still be improved by dividing the year into periods other than quarterly or seasonal. Nevertheless, the results provide a satisfactory Equivalent model, based on the faster computation time and a reasonable relative power difference. Finally, the Equivalent model could be used as a foundation for further analyses and simulations.
50

ENAMS : energy optimization algorithm for mobile wireless sensor networks using evolutionary computation and swarm intelligence

Al-Obaidi, Mohanad January 2010 (has links)
Although Wireless Sensor Networks (WSNs) have traditionally been regarded as static sensor arrays used mainly for environmental monitoring, their applications have recently undergone a paradigm shift from static to more dynamic environments, where nodes are attached to moving objects, people or animals. Applications that use WSNs in motion are broad, ranging from transport and logistics to animal monitoring, health care and the military. These application domains have a number of characteristics that challenge the algorithmic design of WSNs. Firstly, mobility has a negative effect on the quality of the wireless communication and the performance of networking protocols; nevertheless, it has been shown that mobility can enhance the functionality of the network by exploiting the movement patterns of mobile objects. Secondly, the heterogeneity of devices in a WSN has to be taken into account to increase the network performance and lifetime. Thirdly, WSN services should ideally assist the user in an unobtrusive and transparent way. Fourthly, energy efficiency and scalability are of primary importance to prevent degradation of network performance. This thesis contributes toward the design of a new hybrid optimization algorithm, ENAMS (Energy optimizatioN Algorithm for Mobile Sensor networks), which is based on evolutionary computation and swarm intelligence and increases the lifetime of mobile wireless sensor networks. The presented algorithm is suitable for large-scale mobile sensor networks and provides a robust and energy-efficient communication mechanism by dividing the sensor nodes into clusters, where the number of clusters is not predefined and the sensors within each cluster need not be distributed with the same density. The presented algorithm enables the sensor nodes to move as swarms within the search space while keeping optimum distances between the sensors.
To verify the objectives of the proposed algorithm, the LEGO-NXT MIND-STORMS robots are used to act as particles in a moving swarm keeping the optimum distances while tracking each other within the permitted distance range in the search space.
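The spacing behaviour described above — nodes moving as a swarm while holding an optimum inter-node distance — can be sketched with a simple attraction-repulsion rule: each node steps toward neighbours that are too far away and away from neighbours that are too close. None of the constants below come from the thesis, and ENAMS itself combines evolutionary computation and swarm intelligence rather than this bare spring rule.

```python
import numpy as np

rng = np.random.default_rng(5)
target = 2.0  # desired inter-node distance (illustrative)
nodes = rng.uniform(0.0, 1.0, (6, 2))  # six nodes start bunched in a unit square

for _ in range(300):
    step = np.zeros_like(nodes)
    for i in range(len(nodes)):
        for j in range(len(nodes)):
            if i == j:
                continue
            d = nodes[j] - nodes[i]
            dist = np.linalg.norm(d) + 1e-12
            # Attract when further than `target`, repel when closer.
            step[i] += 0.05 * (dist - target) * d / dist
    nodes += step

dists = [np.linalg.norm(nodes[i] - nodes[j])
         for i in range(6) for j in range(i + 1, 6)]
print(round(min(dists), 2), round(max(dists), 2))
```

Six points in the plane cannot all be mutually equidistant, so the rule settles into a compromise layout whose pairwise distances cluster around the target rather than hitting it exactly.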
