481 |
Sobre um método assemelhado ao de Francis para a determinação de autovalores de matrizes [On a method similar to that of Francis for determining the eigenvalues of matrices] / Oliveira, Danilo Elias de [UNESP] 23 February 2006 (has links) (PDF)
The main purpose of this work is to present, to discuss the qualities and performance of, and to prove the convergence of an iterative method for the numerical solution of the matrix eigenvalue problem, which we have called the Método Assemelhado ao de Francis (MAF). This method differs from the QR method of Francis by providing a simpler and faster technique for obtaining the orthogonal matrices Qk, k = 1, 2, ... We also present a comparative analysis between the MAF and the QR algorithm of Francis and the LR algorithm of Rutishauser.
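For orientation, the baseline that the MAF is measured against can be sketched as the basic (unshifted) QR iteration in Python; the MAF itself differs in how the orthogonal factors Qk are obtained, and that construction is not reproduced here:

```python
import numpy as np

def qr_iteration(A, n_iter=200):
    """Baseline (unshifted) QR iteration, the classical method the MAF is
    compared against: factor A_k = Q_k R_k, then form A_{k+1} = R_k Q_k.
    Under standard conditions A_k converges to an upper triangular matrix
    whose diagonal holds the eigenvalues of A."""
    A = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)  # eigenvalue estimates
```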
|
482 |
Auxiliary variable Markov chain Monte Carlo methods / Graham, Matthew McKenzie January 2018 (has links)
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality.

We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show 'sticking' behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches.

As a second contribution we consider inference in probabilistic models defined via a generative process in which the probability density of the outputs is only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are 'close' to, rather than exactly equal to, the observed data. Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in the Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to the observed data while maintaining computational tractability.

As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and to allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
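As an illustration of the framework this thesis starts from, here is a minimal sketch of a pseudo-marginal Metropolis–Hastings step in Python. It assumes a user-supplied `density_estimator(theta, rng)` returning a non-negative unbiased estimate of the target density (the function and its name are hypothetical); keeping the current estimate in the chain state, rather than recomputing it each step, is precisely the auxiliary-variable view the thesis builds on.

```python
import numpy as np

def pseudo_marginal_mh(density_estimator, theta0, n_steps, step_size=0.5, rng=None):
    """Sketch of pseudo-marginal Metropolis-Hastings with a random-walk
    proposal. The stored estimate `dens` plays the role of the auxiliary
    variables: reusing it (instead of re-estimating at the current state)
    is what keeps the marginal chain exactly invariant for the target."""
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    dens = density_estimator(theta, rng)          # unbiased estimate at the start point
    samples = []
    for _ in range(n_steps):
        prop = theta + step_size * rng.standard_normal(theta.shape)
        prop_dens = density_estimator(prop, rng)  # fresh estimate at the proposal
        # accept with probability min(1, prop_dens / dens); written
        # multiplicatively to avoid dividing by a zero-valued estimate
        if rng.uniform() * dens < prop_dens:
            theta, dens = prop, prop_dens         # an overestimated `dens` causes 'sticking'
        samples.append(theta.copy())
    return np.asarray(samples)
```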
|
483 |
Integrace data miningových nástrojů do prostředí MS Visual Studio [Integration of data mining tools into the MS Visual Studio environment] / Dvořan, Jan January 2014 (has links)
This work presents the design and implementation of importing the data mining tool Weka into Visual Studio and Microsoft SQL Server 2012 as a Managed Plug-in algorithm. The thesis describes how a new Managed Plug-in algorithm can be created and how the Weka tool can be imported into it. To use Weka inside the new Managed Plug-in algorithm, the IKVM port is used: IKVM generates a new C# library from the Weka tool, which the Managed Plug-in algorithm can then use.
|
484 |
Incorporating domain expertise into evolutionary algorithm optimisation of water distribution systems / Johns, Matthew Barrie January 2016 (has links)
Evolutionary Algorithms (EAs) have been widely used for the optimisation of both theoretical and real-world non-linear problems, although such optimisation methods have found fairly limited use in fields outside the academic domain. While the causes of this limited uptake in non-academic fields fall outside the scope of this thesis, the core focus of this research remains strongly influenced by the notions of solution feasibility and of making optimisation methods more accessible to engineers, both factors attributed to low EA adoption rates in the commercial space. This thesis focuses on the application of bespoke heuristic methods to the field of water distribution system optimisation. Water distribution systems are complex entities that are difficult to model and optimise, as they consist of many interacting components, each with its own set of considerations; it is therefore important for the engineer to understand and assess the behaviour of the system to enable its effective design and optimisation.

The primary goal of this research is to assess the impact that incorporating water systems knowledge into an evolutionary algorithm has on algorithm performance when applied to water distribution network optimisation problems. This thesis describes the development of two heuristics, influenced by the practices of water systems engineers when designing water distribution networks, with the aim of increasing an algorithm's performance and the feasibility of the resulting solutions. By utilising heuristics based on engineering design principles and integrating them into existing EAs, it is found that both engineering feasibility and general algorithmic performance can be notably improved. Firstly, the heuristics are applied to a standard single-objective EA and then to a multi-objective genetic algorithm. The algorithms are assessed on a number of water distribution network benchmarks from the literature, including real-world based, large-scale systems, and compared to the standard variants of the algorithms. Following this, a set of extensive experiments is conducted to explore how the inclusion of water systems knowledge affects the sensitivity of an evolutionary algorithm to parameter variance. It was found that the performance of both engineering-inspired algorithms was less sensitive to parameter change than that of the standard genetic algorithm variant, meaning that non-experts in the field of meta-heuristics can potentially obtain much better performance from the engineering-heuristic-based algorithms without needing specialist evolutionary algorithm knowledge.

In addition, this research explores the notion that visualisation techniques can provide water system engineers with greater insight into the operation and behaviour of an evolutionary algorithm. The final section of this thesis presents a novel three-dimensional representation of pipe-based water systems and demonstrates a range of innovative methods for conveying information to the user. The interactive visualisation system presented not only allows the engineer to visualise the various parameters of a network but also allows the user to observe the behaviour and progress of an iterative optimisation method. Examples of the combination of the interactive visualisation system and the EAs developed in this work are shown to enable the user to track and visualise the actions of the algorithm. The visualisation aggregates changes to the network over an EA run and grants significant insight into the operations of an EA as it optimises a network. The research presented in this thesis demonstrates the effectiveness of integrating water system engineering expertise into evolutionary optimisation methods. Not only is solution quality improved over standard methods by utilising these new heuristic techniques, but the potential for greater interaction between engineer, problem and optimiser has been established.
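As a hedged illustration of how engineering knowledge can be folded into an EA operator (the thesis's own heuristics may differ in detail), the sketch below mutates a pipe-sizing chromosome toward locally smooth diameters rather than to uniformly random ones, encoding the engineering intuition that pipe diameters should change gradually along a flow path:

```python
import random

def pipe_smoothing_mutation(diameters, adjacency, sizes, rate=0.1, rng=random):
    """Illustrative engineering-informed mutation for a pipe-network chromosome.

    `diameters[i]` is an index into the discrete `sizes` table for pipe i, and
    `adjacency[i]` lists the pipes connected to pipe i (both hypothetical
    encodings).  Instead of mutating to a uniformly random size, as a standard
    operator would, a mutated pipe is nudged one size class toward the mean
    size of its neighbours."""
    child = diameters[:]
    for i in range(len(child)):
        if rng.random() < rate and adjacency[i]:
            target = round(sum(child[j] for j in adjacency[i]) / len(adjacency[i]))
            child[i] += (target > child[i]) - (target < child[i])  # step toward neighbourhood mean
            child[i] = max(0, min(len(sizes) - 1, child[i]))       # stay inside the size table
    return child
```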
|
485 |
Algoritmo genético especializado na resolução de problemas com variáveis contínuas e altamente restritos [A genetic algorithm specialised in solving highly constrained problems with continuous variables] / Zini, Érico de Oliveira Costa. January 2009 (has links)
This work presents a two-phase framework for solving constrained optimization problems using a multi-objective strategy. In the first phase, the objective function is completely disregarded and the entire search effort is directed toward finding at least one feasible solution. In the second phase, the problem is treated as a bi-objective optimization problem: one objective is the original objective function, and the other measures the degree to which the constraints are violated. For the first phase, a strategy based on a progressive decrease of the acceptance tolerance for the complex constraints is proposed in order to find feasible solutions. The approach is validated on 11 benchmark problems that are well known in the specialised literature. / Advisor: Rubén Augusto Romero Lázaro / Co-advisor: José Roberto Sanches Mantovani / Committee: Antonio Padilha Feltrin / Committee: Marcos Julio Rider Flores / Master's
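A minimal sketch of the first phase, assuming a hypothetical helper `violation(x)` that returns the total constraint violation of a candidate `x` (the thesis's actual operators and schedule may differ). The key idea shown is the progressively shrinking acceptance tolerance `eps`:

```python
import numpy as np

def phase_one_feasibility(pop, violation, tighten=0.95, generations=200, rng=None):
    """Phase one of the two-phase framework: ignore the objective entirely
    and evolve the population against a tolerance `eps` on constraint
    violation that shrinks each generation, so 'almost feasible' points are
    accepted early and the requirement is tightened progressively."""
    rng = rng or np.random.default_rng()
    eps = max(violation(x) for x in pop)           # start with the loosest tolerance
    for _ in range(generations):
        eps *= tighten                             # progressive decrease of the tolerance
        scores = [max(violation(x) - eps, 0.0) for x in pop]
        if min(scores) == 0.0 and eps < 1e-9:      # a truly feasible point found
            return pop[int(np.argmin(scores))]
        # keep the least-violating half, refill with Gaussian-mutated children
        order = np.argsort(scores)
        parents = [pop[i] for i in order[: len(pop) // 2]]
        pop = [p + 0.1 * rng.standard_normal(p.shape) for p in parents for _ in (0, 1)]
    return pop[int(np.argmin([violation(x) for x in pop]))]  # best found so far
```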
|
486 |
A Novel Update to Dynamic Q Algorithm and a Frequency-fold Analysis for Aloha-based RFID Anti-Collision Protocols / Khanna, Nikita 01 January 2015
Radio frequency identification (RFID) systems are increasingly used for a wide range of applications, from supply chain management to mobile payment systems. In a typical RFID system there is a reader/interrogator and multiple tags/transponders, which can communicate with the reader. If more than one tag tries to communicate with the reader at the same time, a collision occurs, resulting in failed communications; this becomes a significantly more important challenge as the number of tags in the environment increases. Collision reduction has been studied extensively in the literature, with a variety of algorithm designs specifically tailored for low-power RFID systems.
In this study, we provide an extensive review of existing state-of-the-art time-domain anti-collision protocols, which can generally be divided into two main categories: 1) aloha based and 2) tree based. We explore the maximum theoretical gain in efficiency from a 2-fold frequency division in the ultra-high frequency (UHF) band of 902-928 MHz used for RFID systems in the United States. We analyze how such a modification would change the total number of collisions and improve efficiency for two different anti-collision algorithms from the literature: a relatively basic framed-slotted aloha and a more advanced reservation slot with multi-bits aloha. We also explore how a 2-fold frequency division can be implemented using analog filters for semi-passive RFID tags. Our results indicate significant gains in efficiency for both aloha algorithms, especially for midsize populations of up to 50 tags.
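To make the frequency-fold intuition concrete, here is a small back-of-the-envelope calculation (a sketch, not the thesis's simulation): with a 2-fold frequency division each tag picks a slot and one of two sub-bands, so a reply survives unless another tag chooses the same (slot, sub-band) pair.

```python
def fsa_expected_reads(n_tags, n_slots, n_channels=1):
    """Expected number of tags read in one frame of framed-slotted aloha:
    a tag is read if no other tag picked the same slot/sub-band bin."""
    p = 1.0 / (n_slots * n_channels)
    return n_tags * (1 - p) ** (n_tags - 1)

# e.g. 50 tags in a 32-slot frame, single band vs 2-fold frequency division
print(fsa_expected_reads(50, 32, n_channels=1))   # roughly 11 tags per frame
print(fsa_expected_reads(50, 32, n_channels=2))   # roughly 23 tags per frame
```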
Finally, we propose two modifications to the Q-algorithm, which is currently used as part of the industry-standard EPC Class 1 Generation 2 (Gen 2) protocol. The Q-Slot-Collision-Counter (QSCC) and Q-Frame-Collision-Counter (QFCC) algorithms change the size of the frame more dynamically, depending on the number of colliding tags in each time slot as estimated with the help of a radar cross section technique, whereas the standard Q-algorithm uses a fixed parameter for frame adjustment. In fact, the QFCC algorithm is completely independent of the variable "C", which is used in the standard protocol for modifying the frame size. Through computer simulations, we show that the QFCC algorithm is more robust and provides an average efficiency gain of more than 6% on large populations of tags compared to the existing standard.
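For reference, the standard Gen 2 frame adjustment can be sketched in a few lines; alongside it is a guessed form of a QFCC-style update (hypothetical; the thesis defines the exact rule), in which the step is driven by the counted number of colliding tags instead of the fixed constant C:

```python
import math

def standard_q_update(qfp, slot_outcome, c=0.3):
    """Standard EPC Gen 2 Q-algorithm: a fixed constant C (typically between
    0.1 and 0.5) nudges the floating-point frame exponent Qfp up on a
    collision and down on an empty slot; the frame size is 2**round(Qfp)."""
    if slot_outcome == "collision":
        return min(15.0, qfp + c)
    if slot_outcome == "empty":
        return max(0.0, qfp - c)
    return qfp  # unchanged on a successful single reply

def qfcc_style_update(qfp, n_colliding):
    """Hypothetical sketch of a QFCC-style rule: scale the step by the
    estimated number of colliding tags (e.g. inferred via radar cross
    section), removing the fixed parameter C altogether."""
    if n_colliding >= 2:
        return min(15.0, qfp + math.log2(n_colliding))  # bigger pile-ups, bigger frames
    if n_colliding == 0:
        return max(0.0, qfp - 0.5)
    return qfp
```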
|
487 |
Performance Analysis on Hybrid and Exact Methods for Solving Clustered VRP: A Comparative Study on VRP Algorithms / Tejaswi, Nunna January 2017 (has links)
Context: The Vehicle Routing Problem (VRP) is an NP-hard problem that combines a variety of topics such as logistics, optimization research and data mining. There is a vast need for vehicle routing solutions in day-to-day life, each with different constraints. Accordingly, this problem has been a field of interest for many researchers, who apply scientific methods to combine and innovate new solutions that optimize routing. Being NP-hard, it is almost impossible to compute solutions to optimality, but years of research in this area have paid off quite significantly, and the solutions are optimized little by little, each better than before. Some applications may not consider a slight difference in performance significant, but other applications or scenarios depend heavily on the performance of the solution, where it is vital that the solution is optimized to the fullest. As a data mining technique, clustering has been used very prominently in partitioning scenarios, and it has likewise begun to surface in VRP solutions. Although it has only recently entered the vehicle routing field and shown some significant results, it has not yet reached wide awareness. Awareness of clustering matters to a large extent for recent researchers who formulate new algorithms to solve the VRP, as it can help them further optimize their solutions.

Objectives: This study considers the significance of clustering to find out how the use of clustering techniques can favourably alter the performance of VRP-based solutions. To test the results of two recently proposed cluster-based algorithms, a comparison is made against other types of algorithms, showing how these algorithms stand against various methods.

Methods: A literature review is performed using articles gathered from Google Scholar, followed by an empirical experiment conducted on the results available in the papers. The experiment is carried out as a comparative analysis.

Results: For the literature review, results were gathered from all the articles based on their research, experience, use of clustering, and how their results were improved by using clustering methods in their formulations. In the experiment, the results of both algorithms were compared with the results of five other papers that aim to solve the VRP using exactly the same instances as the two algorithms, so that valid results could be compared on the same variables. The results were then analysed for comparison, and conclusions were drawn accordingly.

Conclusions: From the research performed in this paper we can conclude the vast significance of clustering techniques, drawn from the practical test results of various authors. From the experiment it is clear that the hybrid algorithm has much higher performance than any other algorithm it was compared to; its performance has also been shown to improve due to the implementation of clustering techniques in its formulation. Since the results were based only on performance, in this case the total distance of the final route, future work involves implementing the algorithms so they can also be compared on the basis of time complexity and space complexity.
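As context for what 'cluster-based' means here, the following is a minimal cluster-first, route-second sketch (illustrative only; the hybrid algorithm studied in the thesis is considerably more sophisticated): customers are partitioned with k-means, then each cluster is routed greedily.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_first_route_second(depot, customers, n_vehicles, seed=0):
    """Illustrative 'cluster-first, route-second' decomposition: partition
    customers (an (n, 2) NumPy array of coordinates) with k-means, one
    cluster per vehicle, then build each route with nearest-neighbour.
    Real cluster-based VRP algorithms add capacity handling and local
    search on top of this skeleton."""
    labels = KMeans(n_clusters=n_vehicles, random_state=seed, n_init=10).fit_predict(customers)
    routes = []
    for v in range(n_vehicles):
        todo = [tuple(c) for c in customers[labels == v]]
        route, pos = [], tuple(depot)
        while todo:  # greedily visit the closest remaining customer
            nxt = min(todo, key=lambda c: np.hypot(c[0] - pos[0], c[1] - pos[1]))
            todo.remove(nxt)
            route.append(nxt)
            pos = nxt
        routes.append(route)
    return routes
```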
|
488 |
Application of Spectral Analysis to the Cycle Regression Algorithm / Shah, Vivek 08 1900 (has links)
Many techniques have been developed to analyze time series. Spectral analysis and cycle regression analysis represent two such techniques. This study combines these two powerful tools to produce two new algorithms: the spectral algorithm and the one-pass algorithm. This research encompasses four objectives. The first objective is to link spectral analysis with cycle regression analysis to determine an initial estimate of the sinusoidal period. The second objective is to determine the best spectral window and truncation point combination to use with cycle regression for the initial estimate of the sinusoidal period. The third objective is to determine whether the new spectral algorithm performs better than the old T-value algorithm in estimating sinusoidal parameters. The fourth objective is to determine whether the one-pass algorithm can be used to estimate all significant harmonics simultaneously.
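A minimal sketch of the first objective, seeding cycle regression with a spectral estimate of the period (function names are my own, and the thesis additionally studies spectral windows and truncation points, which this sketch omits):

```python
import numpy as np

def spectral_period_estimate(y, dt=1.0):
    """Use the periodogram peak as the initial period estimate that seeds
    cycle regression, replacing a blind search over candidate periods."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    freqs = np.fft.rfftfreq(len(y), d=dt)
    power = np.abs(np.fft.rfft(y)) ** 2
    k = 1 + np.argmax(power[1:])          # skip the zero-frequency bin
    return 1.0 / freqs[k]

def fit_sinusoid(y, period, dt=1.0):
    """One cycle-regression-style pass: with the period fixed, amplitude and
    phase come from ordinary least squares on sin/cos regressors."""
    t = np.arange(len(y)) * dt
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef  # [a, b, intercept]; amplitude = hypot(a, b)
```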
|
489 |
Generation of floating islands using height maps / Sandberg, Roland January 2013 (has links)
A floating island was generated using two height maps, one for the bottom of the island and one for the top. The vertices of the island were displaced along the Y axis using an algorithm called simplex noise. To avoid the texture stretching that appears when displacing the vertices, an algorithm called tri-planar projection was implemented. After the vertices had been displaced, the normals were smoothed using normal smoothing. An algorithm here called the AVG algorithm was created to smooth the jagged edges that appear around the island after it has been generated. The results of the AVG algorithm were judged by a group of 26 participants to see whether any jagged edges were still perceived. They answered a survey containing seven pictures, with each picture having one more iteration of the AVG algorithm than the last, starting from 0 iterations. Five iterations proved to be the best.
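A rough sketch of the generation step in Python, assuming the third-party `noise` package for simplex noise; the smoothing pass here is a guessed neighbour-averaging rule applied over the whole interior for simplicity, whereas the thesis's AVG algorithm targets the island's rim:

```python
import numpy as np
from noise import snoise2   # simplex noise; pip install noise

def island_heights(n=128, scale=0.03, iterations=3):
    """Displace a top and a bottom height map with simplex noise, taper them
    to zero at the island's edge with a radial mask, then run a few
    neighbour-averaging smoothing iterations (AVG-style)."""
    top = np.array([[snoise2(x * scale, y * scale) for x in range(n)] for y in range(n)])
    bottom = -np.abs(np.array([[snoise2(x * scale + 100, y * scale + 100)
                                for x in range(n)] for y in range(n)]))
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(xx - n / 2, yy - n / 2) / (n / 2)
    mask = np.clip(1.0 - r, 0.0, 1.0)       # radial falloff toward the edge
    top, bottom = top * mask, bottom * mask
    for _ in range(iterations):             # AVG-style smoothing iterations
        top[1:-1, 1:-1] = (top[:-2, 1:-1] + top[2:, 1:-1] +
                           top[1:-1, :-2] + top[1:-1, 2:]) / 4.0
    return top, bottom
```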
|
490 |
Localized Ant Colony of Robots for Redeployment in Wireless Sensor Networks / Wang, Yuan January 2014 (has links)
Sensor failures or oversupply in wireless sensor networks (WSNs), especially after an initial random deployment, create both spare sensors (whose sensing area is fully covered by other sensors) and sensing holes. We envision a team of robots that relocate sensors to improve area coverage. Existing algorithms, including centralized ones and the only localized one, G-R3S2, move only spare sensors and achieve limited improvement, because non-spare sensors whose area coverage is mostly overlapped by neighbouring sensors are not moved, and additional sensors must be deployed to fill existing holes. We propose a localized algorithm, called Localized Ant-based Sensor Relocation Algorithm with Greedy Walk (LASR-G), in which each robot may carry at most one sensor and makes decisions that depend only on locally detected information. In LASR-G, each robot calculates a corresponding pickup or dropping probability, and relocates a sensor with a currently low coverage contribution to another location where a sensing hole would be significantly reduced. The basic algorithm optimizes area coverage only, while a modified algorithm also includes the cost of robot movement. We compare LASR-G with G-R3S2, and examine both single-robot and multi-robot scenarios. The simulation results show the advantages of LASR-G over G-R3S2.
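The pickup/drop mechanics can be sketched with the classic ant-clustering probability forms (a guessed shape; LASR-G's exact expressions are defined in the thesis): a robot tends to pick up sensors whose local coverage contribution is low, and to drop its carried sensor where a large sensing hole would be filled.

```python
import random

def pickup_probability(coverage_contribution, k1=0.1):
    """Ant-style pickup rule: the lower a sensor's unique coverage
    contribution (fraction of its disc not covered by neighbours), the more
    likely a robot is to pick it up.  The (k/(k+f))**2 form follows classic
    ant clustering; the thesis's exact expression may differ."""
    return (k1 / (k1 + coverage_contribution)) ** 2

def drop_probability(hole_reduction, k2=0.3):
    """Ant-style drop rule: the larger the sensing hole a carried sensor
    would fill at the current location, the more likely the robot drops it."""
    return (hole_reduction / (k2 + hole_reduction)) ** 2

def robot_step(carried_sensor, local_contribution, local_hole, rng=random):
    """One greedy-walk decision for a robot that carries at most one sensor,
    using only locally detected information."""
    if carried_sensor is None:
        return "pick" if rng.random() < pickup_probability(local_contribution) else "move"
    return "drop" if rng.random() < drop_probability(local_hole) else "move"
```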
|