601 | An evolutionary algorithm for the constrained forest problem. Queern, John (1 January 2013)
Given an undirected edge-weighted graph G and a positive integer m, the Constrained Forest Problem (CFP) seeks a lowest-cost (or minimum-weight) forest F which spans G while satisfying the requirement that each tree in F contain at least m vertices. This problem has been shown to be NP-hard for values of m greater than three, giving rise to a number of approximation strategies for finding reasonable m-forest solutions. This research presents a new genetic algorithm (GA) which consistently finds equal-or-better solutions to the problem when compared to non-genetic alternatives. This GA is unique in that its chromosomes are actual candidate solutions (m-forests) and it performs genetic operations (random creation, selection, recombination, and mutation) directly on these candidate solutions. Experiments were run using 180 different GA configurations on 50 benchmark graphs to determine which operators and techniques would be most successful in solving the m-forest problem. The "heaviest edge first" (HEF) algorithm run against the minimum spanning tree (MST) of a graph, denoted HEF(MST), was used as the performance benchmark, since it had previously been shown to produce the best results on m-forest problems. When the GA found a better result than HEF(MST) on the same problem instance, this was counted as a GA success. Since the GA's initial population included heuristic candidate solutions such as HEF(MST), the GA never did worse than the best of these. GA solution quality did vary, however, often yielding several different better-than-HEF(MST) solutions, illustrating the stochastic nature of the process. Based on data collected from the 9000 initial problem instances, several factors were shown to significantly improve the quality of the GA solution. Configurations which did not include mutation had a much lower success rate than those which did. Adding calculated heuristic solutions such as HEF(MST) to the initial population allowed the GA to converge more quickly and improved its likelihood of finding better-than-HEF(MST) solutions. Building an initial population from randomly generated candidate solutions whose edges were restricted to the problem graph's MST proved equally successful. GA configuration options were analyzed using all 9000 test cases, and again using only the 403 cases in which the GA found the very best solution for each graph. These analyses were consistent and identified a single "best" GA configuration combining the most successful initial population strategy, random seeding algorithms, and mutation and crossover strategies. The selected configuration was then further tested using various values of m to ensure that the resulting GA could in fact find better-than-HEF(MST) solutions for the majority of problem instances.
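As a concrete illustration of the HEF(MST) benchmark described above, here is a minimal Python sketch, assuming a simple weighted edge-list representation; the function names and data layout are ours for illustration, not the thesis's code:

```python
from collections import defaultdict

def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal's algorithm; edges are (w, u, v)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def component_size(adj, start):
    """Size of the connected component containing `start`."""
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen)

def hef_mst(n, edges, m):
    """Heaviest-edge-first on the MST: drop an edge whenever both
    sides of the resulting cut retain at least m vertices."""
    forest = kruskal_mst(n, edges)
    for w, u, v in sorted(forest, reverse=True):  # heaviest first
        adj = defaultdict(set)
        for w2, a, b in forest:
            if (w2, a, b) != (w, u, v):
                adj[a].add(b); adj[b].add(a)
        if component_size(adj, u) >= m and component_size(adj, v) >= m:
            forest.remove((w, u, v))
    return forest  # edges of the resulting m-forest

edges = [(4, 0, 1), (1, 1, 2), (3, 2, 3), (2, 3, 4), (5, 4, 5)]  # (weight, u, v)
print(hef_mst(6, edges, m=2))  # drops the weight-3 edge, leaving two trees of 3 vertices
```

Greedily dropping the heaviest MST edge whose removal leaves both sides with at least m vertices is the deterministic baseline the abstract names; the GA's contribution is to search beyond this single solution.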
602 | Approximate replication of high-breakdown robust regression techniques. Zeileis, Achim; Kleiber, Christian (January 2008)
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model-fitting algorithm is employed. (Series: Research Report Series, Department of Statistics and Mathematics)
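The phenomenon is easy to reproduce in miniature. The sketch below is our illustration, not the paper's code: a crude random-subset approximation to least trimmed squares is fitted twice with different seeds, and the two runs generally disagree unless the random number generator state is controlled and reported:

```python
import numpy as np

def random_subset_lts(X, y, n_trials, rng):
    """Crude least-trimmed-squares approximation: fit OLS on many random
    elemental subsets and keep the fit with the smallest trimmed SSE."""
    n, p = X.shape
    h = (n + p + 1) // 2          # coverage: roughly half the data
    best, best_obj = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        r2 = np.sort((y - X @ beta) ** 2)[:h]
        if r2.sum() < best_obj:
            best, best_obj = beta, r2.sum()
    return best

n = 50
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
y = 2 + 3 * X[:, 1] + np.random.default_rng(0).normal(0, 0.1, n)
y[:5] += 10                                   # a few gross outliers
rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
print(random_subset_lts(X, y, 200, rng1))     # estimates differ between runs ...
print(random_subset_lts(X, y, 200, rng2))     # ... unless the seed is fixed and published
```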
603 | The algorithm as a cultural tool: Four students' understanding of the addition algorithm (Algoritmen som kulturellt redskap). Bartfai, Sara (January 2016)
The aim of this investigation has been to examine the understanding of the addition algorithm among four students in a second-grade class in Stockholm. A small field study was carried out, including both interviews and classroom studies. Vygotsky's socio-cultural theory, and more specifically the concepts of mediation and cultural tools, has been applied. Vygotsky asserts that our contact with the world is mediated by cultural tools; the addition algorithm is in this thesis seen as a cultural tool that the students are to appropriate. The results show variation in the students' understanding of the addition algorithm. Most importantly, they show that it is possible for students to "say more than they know" with the use of the algorithm. It is difficult to see how much a student really understands of a mathematical concept, and easier to see whether they do not understand it or are using it inappropriately. Therefore it is necessary for teachers to form a dialogue with the students and ask why they do as they do when using mathematical procedures, such as the addition algorithm, in order to gain a perception of their mathematical understanding.
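For reference, the cultural tool under study is the standard written column-addition procedure, which can be stated precisely; this sketch is ours, for illustration only:

```python
def column_addition(a: str, b: str) -> str:
    """Standard written addition algorithm: add digit columns right to
    left, writing the ones digit and carrying the tens digit."""
    a, b = a.zfill(len(b)), b.zfill(len(a))
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_addition("47", "85"))  # '132': 7+5=12, write 2 carry 1, 4+8+1=13
```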
604 | Algorithmic correspondence and completeness in modal logic. Conradie, Willem Ernst (6 March 2008)
This thesis takes an algorithmic perspective on the correspondence between modal and hybrid logics on the one hand, and first-order logic on the other. The canonicity of formulae, and by implication the completeness of logics, is simultaneously treated.

Modal formulae define second-order conditions on frames which, in some cases, are equivalently reducible to first-order conditions. Modal formulae for which the latter is possible are called elementary. As is well known, it is algorithmically undecidable whether a given modal formula defines a first-order frame condition or not. Hence, any attempt at delineating the class of elementary modal formulae by means of a decidable criterion can only constitute an approximation of this class. Syntactically specified such approximations include the classes of Sahlqvist and inductive formulae. The approximations we consider take the form of algorithms.

We develop an algorithm called SQEMA, which computes first-order frame equivalents for modal formulae by first transforming them into pure formulae in a reversive hybrid language. It is shown that this algorithm subsumes the classes of Sahlqvist and inductive formulae, and that all formulae on which it succeeds are d-persistent (canonical), and hence axiomatize complete normal modal logics.

SQEMA is extended to polyadic languages, and it is shown that this extension succeeds on all polyadic inductive formulae. The canonicity result is also transferred.

SQEMA is next extended to hybrid languages. Persistence results with respect to discrete general frames are obtained for certain of these extensions. The notion of persistence with respect to strongly descriptive general frames is investigated, and some syntactic sufficient conditions for such persistence are obtained. SQEMA is adapted to guarantee the persistence with respect to strongly descriptive frames of the hybrid formulae on which it succeeds, and hence the completeness of the hybrid logics axiomatized with these formulae. New syntactic classes of elementary and canonical hybrid formulae are obtained.

Semantic extensions of SQEMA are obtained by replacing the syntactic criterion of negative/positive polarity, used to determine the applicability of a certain transformation rule, by its semantic correlate, monotonicity. In order to guarantee the canonicity of the formulae on which the thus extended algorithm succeeds, syntactically correct equivalents for monotone formulae are needed. Different versions of Lyndon's monotonicity theorem, which guarantee the existence of these equivalents, are proved. Constructive versions of these theorems are also obtained by means of techniques based on bisimulation quantifiers.

Via the standard second-order translation, the modal elementarity problem can be attacked with any second-order quantifier elimination algorithm. Our treatment of this approach takes the form of a study of the DLS algorithm. We partially characterize the formulae on which DLS succeeds in terms of syntactic criteria. It is shown that DLS succeeds in reducing all Sahlqvist and inductive formulae, and that all modal formulae in a single propositional variable on which it succeeds are canonical.
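Two textbook examples of the kind of reduction such algorithms automate (standard facts of correspondence theory, not results of this thesis): both axioms below are Sahlqvist, and each second-order frame condition reduces to the first-order condition shown.

```latex
% Reflexivity: the T axiom defines exactly the reflexive frames.
\Box p \to p
  \quad\rightsquigarrow\quad
  \forall x\, R(x,x)

% Confluence: the Church-Rosser axiom defines exactly the confluent frames.
\Diamond\Box p \to \Box\Diamond p
  \quad\rightsquigarrow\quad
  \forall x\,\forall y\,\forall z\,
    \bigl( R(x,y) \land R(x,z) \to \exists u\,( R(y,u) \land R(z,u) ) \bigr)
```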
605 | Advances in genetic algorithm optimization of traffic signals. Kesur, Khewal Bhupendra (29 May 2008)
Recent advances in the optimization of fixed-time traffic signals have demonstrated a move towards genetic algorithm optimization, with traffic network performance evaluated via stochastic microscopic simulation models. This dissertation examines methods for improved optimization. Several modified versions of the genetic algorithm and alternative genetic operators were evaluated on test networks, and a traffic simulation model was developed for assessment purposes. Application of the CHC search algorithm with real-valued crossover and mutation operators was found to offer improved optimization efficiency over the standard genetic algorithm with binary genetic operators. Computing resources are best utilized by using a single replication of the traffic simulation model with common random numbers for fitness evaluations. Combining these improvements, delay reductions of 13% to 32% were obtained over the standard approaches. A coding scheme allowing for complete optimization of signal phasing is proposed, and a statistical model for comparing genetic algorithm optimization efficiency on stochastic functions is also introduced. Alternative delay measurements, amendments to genetic operators and modifications to the CHC algorithm are also suggested.
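The common-random-numbers point can be made with a toy example (ours, with a hypothetical stand-in simulator): evaluating two signal plans under the same random stream makes the simulation noise largely cancel in their comparison, so a single replication suffices to rank candidates.

```python
import random

def simulate_delay(signal_plan, seed):
    """Stand-in for a stochastic microscopic traffic simulation:
    the objective is noisy, so the random stream matters."""
    rng = random.Random(seed)
    true_delay = sum(signal_plan)            # hypothetical 'true' delay
    return true_delay + rng.gauss(0, 5.0)    # simulation noise

def compare(plan_a, plan_b, seed):
    # Common random numbers: evaluate both candidates under the SAME
    # seed, so the identical noise term cancels in the difference.
    return simulate_delay(plan_a, seed) - simulate_delay(plan_b, seed)

plan_a, plan_b = [30, 45, 25], [35, 40, 30]  # hypothetical green splits
print(compare(plan_a, plan_b, seed=7))       # exactly -5: the true difference
```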
606 | Pollard's rho method. Bucic, Ida (January 2019)
In this work we investigate a factorization method invented by John Pollard, which makes it possible to factorize medium-sized integers into a product of prime numbers. We run a C++ program and test how different parameters affect the results. A connection is drawn between Pollard's rho method, the birthday paradox and Floyd's cycle-finding algorithm. In the results we identify a polynomial function that gives the best effectiveness and performance for Pollard's rho method.
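For concreteness, here is a minimal Python sketch of the method, using the common iteration x -> x^2 + c and Floyd's cycle finding; the thesis's C++ program and its parameter study are not reproduced here:

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection: iterate
    x -> x^2 + c (mod n) with a slow and a fast pointer; a collision
    modulo a hidden prime factor p shows up as gcd(|x - y|, n) > 1."""
    x, y, d = 2, 2, 1
    f = lambda v: (v * v + c) % n
    while d == 1:
        x = f(x)          # tortoise: one step
        y = f(f(y))       # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None   # d == n means retry with another c

print(pollard_rho(8051))   # 97  (8051 = 83 * 97)
```

The link to the birthday paradox is that the values x mod p collide after roughly sqrt(p) steps, and Floyd's algorithm detects the resulting cycle without storing the sequence.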
607 | An evolutionary algorithm for the lot-sizing problem in market foundries (Um algoritmo evolutivo para o problema de dimensionamento de lotes em fundições de mercado). Camargo, Victor Claudio Bento de (16 March 2009)
According to a recent survey of the foundry sector, an important concern of the industry is to improve its production planning. A production plan in a foundry involves two interdependent stages: the determination of the alloys to be melted and of the lots that will be produced. In this work, we study the lot-sizing problem for small foundries, whose objective is to determine a minimum-cost production plan. As suggested in the literature, the proposed heuristic addresses the two stages hierarchically: first the alloys are defined and, subsequently, the lots that are produced from them. We propose a genetic algorithm that explores candidate sets of alloys and uses a heuristic based on Lagrangian relaxation to determine the items to be produced. In addition, we propose an approach to the same problem that uses the knapsack problem to determine the items to be produced. Good results were obtained by the proposed methods.
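The knapsack subproblem mentioned above can be sketched as follows: a generic 0/1 knapsack solved by dynamic programming, where capacity stands for the furnace capacity of an alloy and the items are the products castable from it. The thesis's actual multi-period, multi-alloy model is richer, and the variable names here are ours:

```python
def knapsack(capacity, weights, values):
    """0/1 knapsack by dynamic programming: dp[c] is the best total
    value achievable with capacity c; choice[i][c] records whether
    item i improved dp[c] when it was processed."""
    dp = [0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in weights]
    for i, (w, v) in enumerate(zip(weights, values)):
        for c in range(capacity, w - 1, -1):
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                choice[i][c] = True
    # Recover the chosen items by walking backwards through the table.
    items, c = [], capacity
    for i in range(len(weights) - 1, -1, -1):
        if choice[i][c]:
            items.append(i)
            c -= weights[i]
    return dp[capacity], items

value, chosen = knapsack(10, weights=[5, 4, 6, 3], values=[10, 40, 30, 50])
print(value, chosen)   # 90, items 3 and 1 (weight 7 of capacity 10 used)
```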
608 | Study and application of the FDK algorithm for multi-slice tomographic image reconstruction (Estudo e aplicação do algoritmo FDK para a reconstrução de imagens tomográficas multi-cortes). Araujo, Ericky Caldas de Almeida (29 October 2008)
This work consisted of the study and application of the FDK (Feldkamp-Davis-Kress) algorithm for tomographic image reconstruction using cone-beam geometry, resulting in the implementation of an adapted multi-slice computed tomography (MSCT) system. For the acquisition of the projections, a rotating platform coupled to a goniometer, an X-ray unit and a CCD-type digital detector were used. The FDK algorithm was implemented on a PC, which was used for the reconstruction process. Initially, the original FDK algorithm was applied, assuming ideal physical conditions in the measurement process; corrections for artifacts related to the projection measurement process were then incorporated. Computational simulations were performed to test the functioning of the implemented algorithm. Three projection acquisition systems, using different X-ray equipment, detectors, methodologies and radiographic techniques, were assembled and tested in order to ensure that the best possible sets of projections were collected. The implemented MSCT system was calibrated using an object with a known linear attenuation coefficient distribution, designed and manufactured specifically for this purpose. Finally, the implemented MSCT system was used for the tomographic reconstruction of an inhomogeneous object whose volumetric linear attenuation coefficient distribution was unknown. The reconstructed images were analyzed to assess the performance of the implemented MSCT system.
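In outline, FDK is a weighted filtered backprojection. In one common convention for a flat detector, with source on a circular orbit of radius D (the standard form of the algorithm; the notation is ours and not necessarily the thesis's):

```latex
% 1. Cosine-weight each cone-beam projection and ramp-filter it row by row:
\tilde P_\beta(a,b) \;=\; \Bigl(\tfrac{D}{\sqrt{D^{2}+a^{2}+b^{2}}}\,
    P_\beta(a,b)\Bigr) \ast_a h(a)

% 2. Backproject over the full source orbit with a distance weight:
\hat f(x,y,z) \;=\; \tfrac{1}{2}\int_{0}^{2\pi}
    \frac{D^{2}}{U(x,y,\beta)^{2}}\;
    \tilde P_\beta\bigl(a(x,y,z,\beta),\,b(x,y,z,\beta)\bigr)\,d\beta
```

Here P_beta is the projection at source angle beta, (a, b) are detector coordinates, h is the ramp-filter kernel applied along detector rows, and U is the source-to-voxel distance projected onto the central ray. The measurement-artifact corrections mentioned above enter as modifications to the measured P_beta before this pipeline.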
609 | Decision support using Bayesian networks for clinical decision making. Ogunsanya, Oluwole Victor (January 2012)
This thesis investigates the use of Bayesian Networks (BNs), augmented by the Dynamic Discretization Algorithm, to model a variety of clinical problems. In particular, it demonstrates four novel applications of BNs and dynamic discretization to clinical problems. Firstly, it demonstrates the flexibility of the Dynamic Discretization Algorithm in modeling existing medical knowledge using appropriate statistical distributions. Many practical applications of BNs use the relative-frequency approach when translating existing medical knowledge into a prior distribution in a BN model; this approach does not capture the full uncertainty surrounding the prior knowledge. Secondly, it demonstrates a novel use of the multinomial BN formulation in learning the parameters of categorical variables. The traditional approach requires a fixed number of parameters during the learning process, but this framework allows an analyst to generate a multinomial BN model based on the number of parameters required. Thirdly, it presents a novel application of the multinomial BN formulation and dynamic discretization to learning causal relations between variables. The idea is to treat competing causal relations between variables as hypotheses and use data to identify the best hypothesis. The results show that BN models can provide an alternative to conventional causal learning techniques. The fourth novel application is the use of Hierarchical Bayesian Network (HBN) models, augmented by the dynamic discretization technique, for meta-analysis of clinical data. The results show that BN models can provide an alternative to classical meta-analysis techniques. The thesis presents two clinical case studies to demonstrate these novel applications. The first uses data from a multi-disciplinary team at the Royal London Hospital to demonstrate the flexibility of the multinomial BN framework in learning the parameters of a clinical model; the second demonstrates the use of BNs and dynamic discretization to solve a decision problem. In summary, the combination of the Junction Tree Algorithm and the Dynamic Discretization Algorithm provides a unified modeling framework for solving interesting clinical problems.
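As a minimal illustration of the conjugate machinery behind learning a categorical node's parameters (our sketch; the thesis's multinomial BN formulation and dynamic discretization go well beyond this):

```python
import numpy as np

def dirichlet_update(prior_alpha, counts):
    """Conjugate learning for one categorical node: a Dirichlet prior
    over the outcome probabilities plus observed counts gives a
    Dirichlet posterior; its mean is a smoothed CPT estimate."""
    alpha = np.asarray(prior_alpha, float) + np.asarray(counts, float)
    return alpha, alpha / alpha.sum()

# Three outcomes, uniform prior, hypothetical clinical counts:
posterior_alpha, cpt_row = dirichlet_update([1, 1, 1], [30, 12, 3])
print(cpt_row)   # posterior-mean probabilities for one CPT row
```

Unlike the plain relative-frequency estimate, the posterior also carries the uncertainty around these probabilities, which is the gap in prior-knowledge modeling the abstract points to.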
610 | Boundary conditions in Abelian sandpiles. Gamlin, Samuel (January 2016)
The focus of this thesis is to investigate the impact of boundary conditions on configurations in the Abelian sandpile model. We present two main results. Firstly, we give a family of continuous, measure-preserving, almost one-to-one mappings from the wired spanning forest to recurrent sandpiles. In the special case of $\mathbb{Z}^d$, $d \geq 2$, we show how these bijections yield a power-law upper bound on the rate of convergence to the sandpile measure along any exhaustion of $\mathbb{Z}^d$. Secondly, we consider the Abelian sandpile on ladder graphs. For the ladder sandpile measure $\nu$, a recurrent configuration on the boundary $I$, and a cylinder event $E$, we provide an upper bound for $\nu(E \mid I) - \nu(E)$.
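For readers new to the model, here is a minimal sketch of sandpile stabilization on a finite grid with a sink boundary (our illustration; the thesis's bijections and convergence bounds are not reproduced):

```python
import numpy as np

def stabilize(grid):
    """Abelian sandpile toppling on a finite grid: any site holding 4 or
    more grains topples, sending one grain to each neighbour; grains
    crossing the boundary are lost (the wired boundary acts as a sink).
    By the Abelian property, the final state does not depend on the
    order of topplings."""
    grid = grid.copy()
    while (grid >= 4).any():
        topples = grid // 4 * (grid >= 4)    # topple all unstable sites at once
        grid -= 4 * topples
        grid[1:, :]  += topples[:-1, :]      # grain to the neighbour below
        grid[:-1, :] += topples[1:, :]       # grain to the neighbour above
        grid[:, 1:]  += topples[:, :-1]      # grain to the right neighbour
        grid[:, :-1] += topples[:, 1:]       # grain to the left neighbour
    return grid

print(stabilize(np.full((5, 5), 4)))         # every site topples and the cascade settles
```

The stable configurations reachable by repeatedly adding grains and stabilizing are the recurrent configurations, and it is these that the spanning-forest mappings above parametrize.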