11

Morphological image segmentation for co-aligned multiple images using watersheds transformation

Yu, Hyun Geun, Roberts, Rodney G. January 2004
Thesis (M.S.)--Florida State University, 2004. / Advisor: Dr. Rodney G. Roberts, Florida State University, College of Engineering, Dept. of Electrical and Computer Engineering. Title and description from dissertation home page (Jan. 20, 2005). Includes bibliographical references.
12

Performance analysis of new algorithms for routing in mobile ad-hoc networks : the development and performance evaluation of some new routing algorithms for mobile ad-hoc networks based on the concepts of angle direction and node density

Elazhari, Mohamed S. January 2010
Mobile Ad hoc Networks (MANETs) are of great interest to researchers and have become very popular in the last few years. One of the great challenges is to provide a routing protocol that is capable of offering the shortest and most reliable path in a MANET in which users are moving continuously and have no base station to be used as a reference for their position. This thesis proposes some new routing protocols based on the angles (directions) of the adjacent mobile nodes and also on the node density. In choosing the next node in forming a route, the neighbour node with the closest heading angle to that of the node of interest is selected, so the connection between the source and the destination consists of a series of nodes that are moving in approximately the same direction. The rationale behind this concept is to maintain the connection between the nodes for as long as possible. This is in contrast to the well-known hop-count method, which does not consider the connection lifetime.

We propose three enhancements and modifications of the Ad hoc On-demand Distance Vector (AODV) protocol that can find a suitable path between source and destination using combinations and prioritizations of angle direction and hop count. Firstly, if multiple routing paths are available, the path with the minimum hop count is selected, and when the hop counts are the same, the path with the best angle direction is selected. Secondly, if multiple routing paths are available, the path with the best angle direction is chosen, but if the angles are the same (fall within the same specified segment), the path with the minimum hop count is chosen. Thirdly, if there is more than one path available, we calculate the average of all the heading angles in every path and find the best one (lowest average) from the source to the destination.

In MANETs, flooding is a popular message broadcasting technique, so we also propose a new scheme in which the rebroadcast probability at every host node is dynamically adjusted according to the number of its neighbouring nodes. A fixed probabilistic scheme that can adjust the rebroadcasting probability at a given node according to its ID is also proposed; fixed probabilistic schemes are one way to reduce rebroadcasts and so alleviate the broadcast storm problem. Performance evaluation of the proposed schemes is conducted using the Global Mobile Information System Simulator (GloMoSim), varying a number of important MANET parameters, including node speed, node density, number of nodes and number of packets, all under a Random Waypoint (RWP) mobility model. Finally, we measure and compare the performance of all the proposed approaches by evaluating them against the standard AODV routing protocol. The simulation results reveal that the proposed approaches give broadly comparable overall performance to one another, but perform better than AODV for almost all performance measures and scenarios examined.
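To make the heading-angle criterion concrete, the Python sketch below picks the next hop whose heading is closest to that of the current node and breaks ties by hop count, roughly matching the second prioritization described above. It is a hypothetical illustration, not code from the thesis; the names `select_next_hop`, `angular_difference` and the neighbour tuple layout are assumptions.

```python
def angular_difference(a, b):
    """Smallest absolute difference between two headings, in degrees (0-180)."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_next_hop(current_heading, neighbours):
    """Pick the neighbour whose heading is closest to the current node's heading,
    breaking ties by the neighbour's advertised hop count to the destination.

    `neighbours` is a list of (node_id, heading_in_degrees, hop_count) tuples.
    """
    if not neighbours:
        return None
    best = min(neighbours,
               key=lambda n: (angular_difference(current_heading, n[1]), n[2]))
    return best[0]

# A node heading 90 degrees prefers the neighbour moving in nearly the same direction.
print(select_next_hop(90.0, [("a", 250.0, 2), ("b", 100.0, 3), ("c", 95.0, 3)]))  # -> "c"
```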
13

Combinação afim de algoritmos adaptativos. / Affine combination of adaptive algorithms.

Candido, Renato 13 April 2009
In order to improve the performance of adaptive filters, the combination of algorithms has been receiving much attention in the literature. This method linearly combines the outputs of two filters operating in parallel with different step sizes to obtain an adaptive filter with fast convergence and reduced excess mean squared error (EMSE). In this context, an affine combination of two least-mean-square (LMS) filters was proposed, whose mixing parameter is not restricted to the interval [0, 1]; hence, the affine combination is a generalization of the convex combination. In this work, the affine combination of two LMS algorithms is extended to the supervised algorithms NLMS (normalized LMS) and RLS (recursive least squares), and also to blind equalization using the constant modulus algorithm (CMA). A steady-state analysis of the affine combination of the considered algorithms is presented in a unified manner, assuming white or colored inputs and stationary or nonstationary environments. Through this analysis, it was observed that the affine combination of two algorithms of the same family can provide an EMSE gain of up to 3 dB in relation to its best component filter and, consequently, in relation to the convex combination. To ensure that the combined estimate is at least as good as that of the best component filter, three new algorithms to adapt the mixing parameter were proposed and analyzed.
Using the analysis results of these algorithms in conjunction with the results of the transient analysis of adaptive filters, the transient behavior of the affine combination was analyzed. Through simulations, a good agreement between analytical and experimental results was always observed. In the blind equalization case, a combination of two CMA equalizers with different initializations was also proposed. The simulation results suggest that the affine combination can avoid local minima of the constant modulus cost function.
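The NumPy sketch below illustrates the basic mechanism in a standard supervised-filtering setup: two LMS filters with different step sizes run in parallel, their outputs are mixed as y = η·y1 + (1 − η)·y2 with η unconstrained, and η is adapted by a stochastic-gradient rule. The η update shown is one plausible choice, not necessarily one of the three algorithms proposed in the thesis, and all names and step sizes are illustrative.

```python
import numpy as np

def affine_lms_combination(x, d, num_taps=8, mu1=0.1, mu2=0.01, mu_eta=0.5):
    """Affine combination of a fast (mu1) and a slow (mu2) LMS filter.

    The combined output is y = eta*y1 + (1 - eta)*y2; unlike a convex
    combination, eta is not constrained to [0, 1]."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w1 = np.zeros(num_taps)
    w2 = np.zeros(num_taps)
    eta = 0.5
    y_out = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]           # regressor, most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        y = eta * y1 + (1.0 - eta) * y2
        e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y
        w1 += mu1 * e1 * u                    # fast component filter
        w2 += mu2 * e2 * u                    # slow component filter
        eta += mu_eta * e * (y1 - y2)         # stochastic-gradient update of the mixing parameter
        y_out[n] = y
    return y_out
```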
14

The Maximum Displacement for Linear Probing Hashing

Petersson, Niclas January 2009
In this thesis we study the standard probabilistic model for hashing with linear probing. The main purpose is to determine the asymptotic distribution of the maximum displacement. Depending on the ratio between the number of items and the number of cells, there are several cases to consider. Paper I solves the problem for the special case of almost full hash tables, that is, hash tables where every cell but one is occupied. Paper II completes the analysis by solving the problem for all remaining cases, that is, for every case where the number of items divided by the number of cells lies in the interval [0,1]. The last two papers treat quite different topics. Paper III studies the area covered by the supremum process of Brownian motion. One of the main theorems in Paper I is expressed in terms of the Laplace transform of this area. Paper IV provides a new sufficient condition for a collection of independent random variables to be negatively associated when conditioned on their total sum. The condition applies to a collection of independent Borel-distributed random variables, which made it possible to prove a Poisson approximation that was essential for the completion of Paper II.
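As a small illustration of the quantity being analysed, the hypothetical Python experiment below (not from the thesis) fills a table by linear probing with uniformly random hash values and reports the maximum displacement, i.e. the largest distance an item ends up from its hash cell.

```python
import random

def max_displacement(num_cells, num_items, seed=0):
    """Insert num_items uniformly hashed items into a circular table of
    num_cells cells using linear probing and return the maximum displacement."""
    rng = random.Random(seed)
    occupied = [False] * num_cells
    worst = 0
    for _ in range(num_items):
        home = rng.randrange(num_cells)       # hash value = home cell
        pos = home
        while occupied[pos]:                  # probe linearly, with wrap-around
            pos = (pos + 1) % num_cells
        occupied[pos] = True
        worst = max(worst, (pos - home) % num_cells)
    return worst

# Almost full table: every cell but one is occupied (the case studied in Paper I).
print(max_displacement(num_cells=1024, num_items=1023))
```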
15

Alternative Measures for the Analysis of Online Algorithms

Dorrigiv, Reza 26 February 2010
In this thesis we introduce and evaluate several new models for the analysis of online algorithms. In an online problem, the algorithm does not know the entire input from the beginning; the input is revealed in a sequence of steps. At each step the algorithm must make its decisions based on the past and without any knowledge about the future. Many important real-life problems such as paging and routing are intrinsically online, and thus the design and analysis of online algorithms is one of the main research areas in theoretical computer science. Competitive analysis is the standard measure for the analysis of online algorithms. It has been applied to many online problems in diverse areas ranging from robot navigation to network routing, scheduling, and online graph coloring. While in several instances competitive analysis gives satisfactory results, for certain problems it results in unrealistically pessimistic ratios and/or fails to distinguish between algorithms that have vastly differing performance under any practical characterization. Addressing these shortcomings has been the subject of intense research by many of the best minds in the field.

In this thesis, building upon recent advances of others, we introduce some new models for the analysis of online algorithms, namely Bijective Analysis, Average Analysis, Parameterized Analysis, and Relative Interval Analysis. We show that they lead to good results when applied to paging and list update algorithms. Paging and list update are two well-known online problems. Paging is one of the main examples of the poor behavior of competitive analysis. We show that LRU is the unique optimal online paging algorithm according to Average Analysis on sequences with locality of reference. Recall that in practice input sequences for paging have high locality of reference, and it has long been empirically established that LRU is the best paging algorithm. Yet Average Analysis is the first model that gives a strict separation of LRU from all other online paging algorithms, thus solving a long-standing open problem. We prove a similar result for the optimality of MTF for list update on sequences with locality of reference.

A technique for the analysis of online algorithms has to be effective to be useful in the day-to-day analysis of algorithms. While Bijective and Average Analysis succeed at providing fine separation, their application can be, at times, cumbersome. Thus we apply a parameterized or adaptive analysis framework to online algorithms. We show that this framework is effective, can be applied more easily to a larger family of problems, and leads to a finer analysis than the competitive ratio. The conceptual innovation of parameterizing the performance of an algorithm by something other than the input size was first introduced over three decades ago [124, 125]. By now it has been extensively studied and understood in the context of adaptive analysis (for problems in P) and parameterized algorithms (for NP-hard problems), yet to our knowledge this thesis is the first systematic application of this technique to the study of online algorithms. Interestingly, competitive analysis can be recast as a particular form of parameterized analysis in which the performance of OPT is the parameter. In general, for each problem we can choose the parameter/measure that best reflects the difficulty of the input. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input sequence.
Using a finer, more natural measure, we can separate paging and list update algorithms that were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results. We test list update algorithms in the context of a data compression problem known to have locality of reference. Our experiments show that MTF outperforms other list update algorithms in practice after the Burrows-Wheeler transform (BWT). This is consistent with the intuition that BWT increases locality of reference.
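For readers unfamiliar with the paging problem, the sketch below shows the standard LRU rule discussed above: on a fault with a full cache, evict the least recently used page. This is textbook LRU in Python, not code from the thesis, and the request sequence in the example is made up.

```python
from collections import OrderedDict

def lru_faults(requests, cache_size):
    """Count the page faults LRU incurs on a request sequence."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)           # hit: mark as most recently used
        else:
            faults += 1                       # fault: fetch the page
            if len(cache) >= cache_size:
                cache.popitem(last=False)     # evict the least recently used page
            cache[page] = True
    return faults

# Sequences with high locality of reference incur few faults under LRU.
print(lru_faults("aababacbabccdcdc", cache_size=3))
```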
17

Fast and Accurate Visibility Preprocessing

Nirenstein, Shaun 01 October 2003
Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing algorithms is to use a form of approximate solution, known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm also has the features of output sensitivity (to what is visible), and a logarithmic dependency in the size of the camera space partition. These advantages come at the cost of image error. We present a heuristic guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm also has the advantage of output sensitivity. This is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we had to first solve a more general form of the standard stabbing problem. An efficient solution to this problem is presented independently.
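Schematically, a from-region visibility preprocessor of the kind described above computes, for every view cell, a potentially visible set (PVS) that is stored to disk and reused at run time. The Python sketch below is purely illustrative; the file name, function names and the `is_visible` callback are assumptions, and the actual sampled or exact visibility tests of the thesis are abstracted away.

```python
import pickle

def preprocess_visibility(view_cells, objects, is_visible):
    """Offline pass: for each view cell, record the set of objects that may be
    visible from anywhere inside it. is_visible(cell, obj) stands in for the
    real from-region visibility test (hardware-sampled or exact). Cells and
    objects are assumed to be hashable identifiers."""
    pvs = {cell: {obj for obj in objects if is_visible(cell, obj)}
           for cell in view_cells}
    with open("pvs.pkl", "wb") as f:          # persisted until the geometry changes
        pickle.dump(pvs, f)
    return pvs

def render_from_cell(camera_cell, pvs, draw):
    """Runtime pass: submit only the precomputed potentially visible set."""
    for obj in pvs.get(camera_cell, ()):
        draw(obj)
```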
18

Hidden Markov models for remote protein homology detection

Wistrand, Markus, January 2005
Diss. (summary), Stockholm: Karolinska institutet, 2006. / Accompanied by 4 papers.
19

Algorithms for building and evaluating multiple sequence alignments

Lassmann, Timo, January 2006
Diss. (summary), Stockholm: Karolinska institutet, 2006. / Accompanied by 6 papers.
20

Sink free orientations in a graph

Sivanathan, Gowrishankar. January 2009
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2009. / Includes bibliographical references.
