531 |
Hierarchical Logcut : A Fast And Efficient Way Of Energy Minimization Via Graph Cuts. Kulkarni, Gaurav 06 1900 (has links) (PDF)
Graph cuts have emerged as an important combinatorial optimization tool for many problems in vision. Most computer vision problems are discrete labeling problems. For example, in stereopsis, labels represent disparity and, in image restoration, labels correspond to image intensities. Finding a good labeling involves optimization of an energy function. In computer vision, energy functions for discrete labeling problems can be elegantly formulated through Markov Random Field (MRF) based modeling, and graph cut algorithms have been found to efficiently optimize a wide class of such energy functions.
The main contribution of this thesis lies in developing an efficient combinatorial optimization algorithm that can be applied to a wide class of energy functions. Generally, graph cut algorithms deal sequentially with each label in the labeling problem at hand, so their time complexity increases linearly with the number of labels. Our algorithm finds a solution/labeling in logarithmic time complexity without compromising the quality of the solution.
In our work, we present an improved Logcut algorithm [24]. The Logcut algorithm [24] finds the individual bit values in the integer representation of the labels. It has logarithmic time complexity, but requires training over a data set. Our improved Logcut (Heuristic-Logcut, or H-Logcut) algorithm eliminates the need for training and obtains results comparable to the original Logcut algorithm.
The original Logcut algorithm cannot be initialized with a known labeling. We present a new algorithm, Sequential Bit Plane Correction (SBPC), which overcomes this drawback. SBPC starts from a known labeling and individually corrects each bit of a label; it too has logarithmic time complexity. SBPC in combination with the H-Logcut algorithm further improves the rate of convergence and the quality of the results.
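The bit-plane idea behind Logcut and SBPC can be sketched as follows (an illustrative Python sketch only, not the thesis code; `binary_graphcut` is a hypothetical stand-in for a standard binary s-t mincut solver). Each of the log2(L) bit planes of the integer labels is settled by a single binary graph cut, which is where the logarithmic complexity comes from.

```python
import numpy as np

def sbpc_sketch(init_labels, num_bits, binary_graphcut):
    """Sequential Bit Plane Correction, illustrative only.

    init_labels     : integer label array (e.g., per-pixel disparities).
    num_bits        : bits in the integer representation of a label.
    binary_graphcut : hypothetical binary s-t mincut solver; given two
                      candidate labelings, it returns a boolean mask that
                      is True where the first labeling has lower energy.
    """
    labels = init_labels.copy()
    for b in range(num_bits - 1, -1, -1):        # most significant bit first
        flipped = labels ^ (1 << b)              # flip bit b of every label
        keep = binary_graphcut(labels, flipped)  # one binary cut per bit plane
        labels = np.where(keep, labels, flipped)
    return labels                                # num_bits = log2(#labels) cuts
```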
Finally, a hierarchical approach to graph cut optimization is used to further improve the rate of convergence. In a hierarchical approach, a solution is first computed at a coarser level and its result is then used to initialize the algorithm at a finer level. Here we present a novel way of initializing the algorithm at the finer level through the fusion move [25]. The SBPC and H-Logcut algorithms are extended to accommodate the hierarchical approach. This approach is found to drastically improve the rate of convergence and to attain very low-energy labelings.
The effectiveness of our approach is demonstrated on stereopsis. The algorithm is found to significantly outperform all existing algorithms in terms of both solution quality and rate of convergence.
|
532 |
Využití distribuovaných a stochastických algoritmů v síti / Application of distributed and stochastic algorithms in a network. Yarmolskyy, Oleksandr January 2018 (has links)
This thesis deals with distributed and stochastic algorithms, including testing their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages, and disadvantages. Two distributed algorithms and two stochastic algorithms are then chosen, and the practical part compares their speed of convergence on various network topologies in Matlab.
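As a rough illustration of the kind of distributed algorithm whose convergence such experiments measure (a sketch only; the thesis does not say which four algorithms were chosen), consider average consensus, where the convergence speed depends directly on the network topology:

```python
import numpy as np

def average_consensus(x0, A, steps=200, eps=0.1):
    """Each node repeatedly moves toward the mean of its neighbours:
    x_i <- x_i + eps * sum_j A_ij * (x_j - x_i)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x + eps * (A @ x - A.sum(axis=1) * x)
    return x

# 4-node ring: every node ends up with the global average. The number of
# steps needed depends on the topology's algebraic connectivity, which is
# what comparing convergence across topologies effectively measures.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(average_consensus([1.0, 2.0, 3.0, 4.0], A))  # ~[2.5, 2.5, 2.5, 2.5]
```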
|
533 |
Low-density Parity-Check decoding Algorithms / Low-density Parity-Check avkodare algoritm. Pirou, Florent January 2004 (has links)
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, an effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master thesis report, following an error-coding background, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
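For a flavour of the decoding side, here is a minimal sketch of hard-decision bit-flipping decoding, one of the simplest LDPC decoding algorithms (illustrative only; practical decoders typically use soft-decision belief propagation or min-sum, and the toy parity-check matrix below is invented for the example):

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=50):
    """Hard-decision bit-flipping LDPC decoding (illustrative sketch).

    H : parity-check matrix (m x n, entries 0/1)
    r : received hard-decision bits (length n)
    Each iteration flips the bits involved in the most unsatisfied parity
    checks, until all checks pass or the iteration limit is reached.
    """
    x = np.array(r, dtype=int) % 2
    for _ in range(max_iters):
        syndrome = (H @ x) % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            return x, True                  # valid codeword found
        votes = H.T @ syndrome              # unsatisfied checks touching each bit
        x[votes == votes.max()] ^= 1        # flip the worst offenders
    return x, False

# Toy parity-check matrix; the all-zero word is a codeword of any linear code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                            # inject a single bit error
print(bit_flip_decode(H, received))         # recovers the all-zero codeword
```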
|
534 |
Detecting and preventing the electronic transmission of illicit images. Ibrahim, Amin Abdurahman 01 April 2009
The sexual exploitation of children remains a very serious problem and is rapidly increasing globally through the use of the Internet. This work focuses on the current methods employed by criminals to generate and distribute child pornography, the methods used by law enforcement agencies to deter them, and the drawbacks of currently used methods, as well as the surrounding legal and privacy issues. A proven method to detect the transmission of illicit images at the network layer is presented in this thesis. With this research, it is now possible to actively filter illicit pornographic images as they are transmitted over the network layer in real time. It is shown that a Stochastic Learning Weak Estimator learning algorithm and a Maximum Likelihood Estimator learning algorithm can be applied with linear classifiers to identify and filter illicit pornographic images. In this thesis, these two learning algorithms were combined with distance measures such as the Non-negative Vector Similarity Coefficient-based Distance, the Euclidean distance, and the weighted Euclidean distance. Based upon this research, a prototype was developed using the above-mentioned system, capable of classifying both compressed and uncompressed images. Experimental results quantified the classification accuracies achieved and showed that the overhead of network-based approaches has a significant effect on routing devices. All images used in our experiments were legal. No actual child pornography images were ever collected, seen, sought, or used.
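The distance-based classification step can be sketched as follows (a hypothetical nearest-centroid setup; the feature vectors, centroids, and weights here are invented for illustration and are not the thesis's trained values):

```python
import numpy as np

def weighted_euclidean(x, c, w):
    """Weighted Euclidean distance between feature vector x and centroid c;
    w gives per-feature weights (e.g., inverse variances)."""
    return np.sqrt(np.sum(w * (x - c) ** 2))

def classify(features, centroids, weights):
    """Assign a feature vector to the class of the nearest centroid.
    centroids/weights: dicts mapping class label -> vector."""
    return min(centroids,
               key=lambda cls: weighted_euclidean(features, centroids[cls],
                                                  weights[cls]))

# Hypothetical two-class setup; in the thesis the class statistics would
# come from the SLWE/MLE learning algorithms rather than be hand-picked.
centroids = {"illicit": np.array([0.8, 0.6]), "benign": np.array([0.2, 0.3])}
weights   = {"illicit": np.array([1.0, 2.0]), "benign": np.array([1.0, 2.0])}
print(classify(np.array([0.7, 0.5]), centroids, weights))  # -> 'illicit'
```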
|
536 |
Performance Comparison of Selective Rake Receivers with CLEAN Algorithms in UWB Systems. Yang, Siang-Yu 26 July 2006
The Ultra-Wideband (UWB) channel is a dense multipath channel. The system performance and design complexity of the selective Rake (SRake) receiver are studied. A Rake receiver has difficulty achieving the desired system performance in a dense multipath environment. The main ideas of the SRake receiver are to obtain the SNR level on a known multipath channel and to determine the desired number of Rake fingers. In the implementation of the SRake, the CLEAN algorithm is used to select the paths with relatively high energy, so the performance of the SRake receiver can be improved by increasing the accuracy of path selection. By exploiting local maximum peaks within smaller partitions, the Two-Stage CLEAN algorithm acquires more accurate multipath delay estimates. In order to mitigate sidelobe effects and noise interference, the key idea of Deng's Modified CLEAN algorithm is to use the average amplitude around the considered data as the criterion for deciding whether a data value is a true path. In this thesis, we investigate the CLEAN, Two-Stage CLEAN, and Deng's Modified CLEAN algorithms in three different systems: UWB impulse radio, pulse radar, and DS-UWB. From the performance comparison, it can be seen that the Two-Stage CLEAN algorithm has the highest path-selection accuracy in the UWB systems.
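The basic CLEAN loop that all three variants build on can be sketched as follows (illustrative Python; the pulse template and the two-path toy channel are invented for the example):

```python
import numpy as np

def clean(received, template, num_paths):
    """Basic CLEAN deconvolution (illustrative sketch).

    Repeatedly finds the strongest correlation peak between the residual
    signal and the pulse template, records its delay and amplitude, and
    subtracts that contribution. This is how the paths with relatively
    high energy are selected for the Rake fingers.
    """
    residual = np.array(received, dtype=float)
    paths = []
    for _ in range(num_paths):
        corr = np.correlate(residual, template, mode="valid")
        delay = int(np.argmax(np.abs(corr)))
        amp = corr[delay] / np.sum(template ** 2)
        paths.append((delay, amp))
        residual[delay:delay + len(template)] -= amp * template
    return paths, residual

# Toy channel: two paths at delays 5 and 20 with amplitudes 1.0 and 0.6.
template = np.array([1.0, -1.0])            # stand-in UWB pulse shape
sig = np.zeros(40)
for d, a in [(5, 1.0), (20, 0.6)]:
    sig[d:d + 2] += a * template
print(clean(sig, template, 2)[0])           # recovers both (delay, amplitude)
```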
|
537 |
Novel Methods for Primality Testing and Factoring. Hammad, Yousef Bani January 2005
From the time of the Greeks, primality testing and factoring have fascinated mathematicians, and for centuries following the Greeks primality testing and factorization were pursued by enthusiasts and professional mathematicians for their intrinsic value. There was little practical application. One example application was to determine whether or not the Fermat numbers, that is, numbers of the form F_n = 2^(2^n) + 1, were prime. Fermat conjectured that they were prime for all n. For n = 1, 2, 3, 4 the Fermat numbers are prime, but Euler showed that F_5 is not prime, and to date no F_n with n >= 5 has been found to be prime. Thus, for nearly 2000 years primality testing and factorization were largely pure mathematics. This all changed in the mid-1970s with the advent of public key cryptography. Large prime numbers are used in generating keys in many public key cryptosystems, and the security of many of these cryptosystems depends on the difficulty of factoring numbers with large prime factors. Thus, the race was on to develop new algorithms to determine the primality or otherwise of a given large integer and to determine the factors of given large integers. The development of such algorithms continues today. This thesis develops both of these themes. The first part of this thesis deals with primality testing and, after a brief introduction to primality testing, a new probabilistic primality algorithm, ALI, is introduced. It is analysed in detail and compared to the Fermat and Miller-Rabin primality tests. It is shown that the ALI algorithm is more efficient than the Miller-Rabin algorithm in some aspects. The second part of the thesis deals with factoring and, after looking closely at various types of algorithms, a new algorithm, RAK, is presented. It is analysed in detail and compared with Fermat factorization. The RAK algorithm is shown to be significantly more efficient than the Fermat factoring algorithm. A number of enhancements are made to the basic RAK algorithm in order to improve its performance. The RAK algorithm with its enhancements is known as IMPROVEDRAK. In conjunction with this work on factorization, an improvement to Shor's factoring algorithm is presented. For many integers Shor's algorithm uses a quantum computer multiple times to factor a composite number into its prime factors. It is shown that Shor's algorithm can be modified in a way such that the use of a quantum computer is required just once. The common thread throughout this thesis is the application of factoring and primality testing techniques to integer types which commonly occur in public key cryptosystems. Thus, this thesis contributes not only to the area of pure mathematics but also to the very contemporary area of cryptology.
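For reference, the two classical baselines mentioned above can be sketched as follows (standard textbook versions, not the ALI or RAK algorithms themselves):

```python
import math
import random

def fermat_factor(n):
    """Fermat factorization, the classical baseline RAK is compared against:
    find a, b with n = a^2 - b^2 = (a - b)(a + b). Works best when n is odd
    with two factors close to sqrt(n)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

def miller_rabin(n, rounds=20):
    """Standard Miller-Rabin probabilistic primality test."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # probably prime

print(fermat_factor(5959))        # (59, 101)
print(miller_rabin(2**61 - 1))    # True (a Mersenne prime)
```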
|
538 |
Expansão estática de sistemas de transmissão de energia elétrica via FPA / Static expansion of electric power transmission systems via FPA. Neves, Patrícia Silva 31 August 2017
This work presents the joint application of a bio-inspired optimization technique and a Heuristic Constructive Algorithm (HCA) to the solution of the power system static transmission expansion planning problem. The bio-inspired algorithm used is a modified version of the Flower Pollination Algorithm (FPA) that includes the clonal selection operator from the Clonal Selection Algorithm (CLONALG), with the aim of strengthening the FPA's local search process. The modified version proposed in this work is entitled the Clonal Flower Pollination Algorithm (CFPA). The CFPA carries out power system transmission expansion planning; that is, it determines, from a set of predefined candidate transmission lines (circuits), which of them must be constructed in order to minimize the power system's investment and operation costs while meeting the forecast demand over a given planning horizon. To increase the efficiency of the CFPA's search process, information from the HCA is used both in the initialization of the CFPA and in the selection of a reduced set of the candidate lines most relevant to the expansion plan, thus reducing the search space. To evaluate the proposed methodology, the Garver system, the IEEE 24-bus system, and the equivalent system of the southern region of Brazil were simulated. The results show that both the inclusion of the clonal selection operator and the heuristic information were able to increase the efficiency of the FPA in solving this problem.
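The core continuous FPA iteration that the CFPA modifies can be sketched as follows (a generic textbook version under stated assumptions; the clonal selection operator and the discrete expansion-planning encoding are not shown):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    """Mantegna's method for Levy-flight step sizes, as commonly used in FPA."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def fpa(cost, dim, n_flowers=25, p=0.8, iters=300, rng=None):
    """Generic continuous Flower Pollination Algorithm loop."""
    rng = rng or np.random.default_rng(0)
    X = rng.uniform(-5, 5, (n_flowers, dim))      # population of candidates
    fit = np.array([cost(x) for x in X])
    best = X[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:                  # global pollination (Levy flight)
                cand = X[i] + levy_step(dim, rng) * (best - X[i])
            else:                                 # local pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = X[i] + rng.random() * (X[j] - X[k])
            c = cost(cand)
            if c < fit[i]:                        # greedy acceptance
                X[i], fit[i] = cand, c
        best = X[fit.argmin()].copy()
    return best, fit.min()

# Sanity check on the sphere function, whose minimum is at the origin.
best, value = fpa(lambda x: float(np.sum(x ** 2)), dim=3)
print(best, value)
```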
|
539 |
Algoritmo híbrido aplicado ao planejamento da expansão de redes aéreas de média tensão / Hybrid algorithm applied to the planning of the expansion of medium voltage overhead networks. Cuno, Miguel Angel Sánchez 16 August 2016 (has links)
This work presents the development of a Hybrid Algorithm to solve the problem of planning the expansion of medium voltage overhead networks. The Hybrid Algorithm uses two strategies to solve the problem. The first uses a Constructive Heuristic Algorithm that works with parameters instead of variables, with the objective of reducing the convergence time of the search process while trying not to impair the quality of the solution. The second strategy is based on a Branch and Bound algorithm that takes the solution obtained while the first strategy runs as its starting point; this solution is used as the incumbent in the second process. In this way, the hybrid algorithm developed and implemented in this work combines the short convergence time of the Constructive Heuristic Algorithm with the guarantee of solution quality provided by Branch and Bound algorithms. The algorithm was tested on three test systems, and an expansion plan for medium voltage overhead networks was established for each system.
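The role of the heuristic incumbent can be illustrated on a deliberately tiny toy model (invented for this sketch; the actual planning model is far richer): seeding Branch and Bound with a feasible heuristic solution lets it prune from the very first node.

```python
def expansion_bnb(costs, caps, demand, incumbent):
    """Toy branch-and-bound: pick a subset of candidate lines whose total
    capacity covers `demand` at minimum cost. `incumbent` is a feasible
    heuristic solution (the role the Constructive Heuristic Algorithm
    plays in the hybrid scheme) used as the initial upper bound."""
    n = len(costs)
    best_cost = sum(costs[i] for i in incumbent)
    best_set = set(incumbent)

    def recurse(i, chosen, cost, cap):
        nonlocal best_cost, best_set
        if cost >= best_cost:                 # bound: cannot beat the incumbent
            return
        if cap >= demand:                     # feasible: new incumbent found
            best_cost, best_set = cost, set(chosen)
            return
        if i == n or cap + sum(caps[i:]) < demand:
            return                            # dead end: cannot become feasible
        recurse(i + 1, chosen | {i}, cost + costs[i], cap + caps[i])  # build line i
        recurse(i + 1, chosen, cost, cap)                             # skip line i

    recurse(0, set(), 0.0, 0.0)
    return best_set, best_cost

# The heuristic proposes lines {0, 1}; B&B proves that line 2 alone is cheaper.
costs, caps = [4.0, 3.0, 5.0], [50.0, 40.0, 90.0]
print(expansion_bnb(costs, caps, demand=80.0, incumbent=[0, 1]))  # ({2}, 5.0)
```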
|
540 |
Topics in Network Utility Maximization : Interior Point and Finite-step Methods. Akhil, P T January 2017 (has links) (PDF)
Network utility maximization has emerged as a powerful tool in studying flow control, resource allocation and other cross-layer optimization problems. In this work, we study a flow control problem in the optimization framework. The objective is to maximize the sum utility of the users subject to the flow constraints of the network. The utility maximization is solved in a distributed setting; the network operator does not know the user utility functions and the users know neither the rate choices of other users nor the flow constraints of the network.
We build upon a popular decomposition technique proposed by Kelly [Eur. Trans. Telecommun., 8(1), 1997] to solve the utility maximization problem in the aforementioned distributed setting. The technique decomposes the utility maximization problem into a user problem, solved by each user, and a network problem, solved by the network. We propose an iterative algorithm based on this decomposition technique. In each iteration, the users communicate to the network their willingness to pay for the network resources. The network allocates rates in a proportionally fair manner based on the prices communicated by the users. The new feature of the proposed algorithm is that the rates allocated by the network remain feasible at all times. We show that the iterates put out by the algorithm asymptotically track a differential inclusion. We also show that the solution to the differential inclusion converges to the system optimal point via Lyapunov theory. As a benchmark, we use a popular algorithm due to Kelly et al. [J. of the Oper. Res. Soc., 49(3), 1998] that involves fast user updates coupled with slow network updates in the form of additive increase and multiplicative decrease of the user flows. The proposed algorithm may be viewed as one with fast user updates and fast network updates that keeps the iterates feasible at all times. Simulations suggest that our proposed algorithm converges faster than the aforementioned benchmark algorithm.
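The decomposition can be illustrated on a single shared link (a minimal sketch under the assumption of one bottleneck link; the thesis treats general networks and proves convergence via a differential inclusion). Note that the allocation stays feasible at every iteration, echoing the feasibility property emphasized above.

```python
import numpy as np

def kelly_iteration(marginal_utility, n_users, capacity, iters=200):
    """Kelly-style decomposition on one shared link (illustrative sketch).

    Each round: every user reports a willingness-to-pay w_i = x_i * U_i'(x_i)
    (the user problem); the network then splits capacity in proportion to
    the w_i, a weighted proportionally fair allocation (the network
    problem). The fixed point maximizes sum_i U_i(x_i) subject to
    sum_i x_i <= capacity.
    """
    x = np.full(n_users, capacity / n_users)   # feasible starting rates
    for _ in range(iters):
        w = x * marginal_utility(x)            # user problem: choose payments
        x = capacity * w / w.sum()             # network problem: allocate rates
    return x

# Three users with utilities U_i(x) = a_i * sqrt(x), so U_i'(x) = a_i / (2 sqrt(x)).
a = np.array([1.0, 2.0, 4.0])
rates = kelly_iteration(lambda x: a / (2.0 * np.sqrt(x)), 3, capacity=6.0)
print(rates, rates.sum())  # rates proportional to a_i^2, summing to the capacity
```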
When the flows originate or terminate at a single node, the network problem is the maximization of a so-called d-separable objective function over the bases of a polymatroid. The solution is the lexicographically optimal base of the polymatroid. We map the problem of finding the lexicographically optimal base of a polymatroid to the geometrical problem of finding the concave cover of a set of points on a two-dimensional plane. We also describe an algorithm that finds the concave cover in linear time.
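Once the points are sorted by x-coordinate, the concave cover is computable in one monotone-chain pass (a standard computational-geometry sketch shown for illustration; the thesis's mapping from polymatroid bases to points is not reproduced here):

```python
def concave_cover(points):
    """Upper concave envelope of points sorted by x-coordinate, in linear time.

    One pass over the sorted points; each point is pushed once and popped
    at most once, giving the linear running time."""
    hull = []
    for p in points:                       # points = [(x0, y0), (x1, y1), ...]
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or below the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

print(concave_cover([(0, 0), (1, 2), (2, 2.5), (3, 1), (4, 3)]))
# -> [(0, 0), (1, 2), (2, 2.5), (4, 3)]: the concave cover of the point set
```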
Next, we consider the minimization of a more general objective function, i.e., a separable convex function, over the bases of a polymatroid with a special structure. We propose a novel decomposition algorithm and prove its correctness and optimality via the theory of polymatroids. Further, motivated by the need to handle piecewise linear concave utility functions, we extend the decomposition algorithm to handle the case where the separable convex functions are not continuously differentiable or not strictly convex, and again provide a proof of correctness and optimality.
|