21 |
Avaliação do uso de técnicas de agrupamento na busca e recuperação de imagens / Evaluation of the use of clustering techniques in image search and retrieval
Silva Filho, Antonio Fernandes da, 26 August 2016 (has links)
Nowadays, almost all services and daily tasks involve some computational apparatus, leading to the creation and steady accumulation of data. This growing volume of data is an important opportunity for scientific and commercial exploration, and these sectors have come to value and use such information more intensively and objectively. In addition, the natural exposure of public and private life through social networks and electronic devices tends to generate a significant number of images that can, and should, be used for many purposes, such as public security. In this context, facial recognition has advanced and attracted specific studies and applications aimed at identifying individuals through parametric features. However, some barriers remain that hinder the efficient execution of this operation, such as the computational cost of searching and retrieving images in databases of large proportions. Based on this, this work proposes the use of clustering algorithms to organize image data, thereby directing and “shortening” searches over facial images. More specifically, it analyzes the speed-up obtained by applying clustering techniques to the automated organization of images as a preparatory step for performing searches. The proposed method was applied to real facial image databases using two clustering algorithms, k-means and EM, with variations of the similarity measure (Euclidean distance and Pearson correlation). The results show that clustering is an efficient way to organize the data, leading to a significant reduction in search time with no loss in the accuracy of the process.
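The thesis code is not part of this record; the snippet below is only a minimal sketch, assuming scikit-learn and pre-computed face feature vectors, of the cluster-then-search idea the abstract describes: k-means partitions the gallery offline, and a query is compared only against the members of its nearest cluster. Names such as `gallery`, `query`, and `n_clusters` are illustrative assumptions, not artifacts of the thesis.

```python
# Illustrative sketch only: cluster-then-search over face feature vectors.
# Assumes feature extraction has already been done (one embedding per image).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))   # hypothetical face embeddings
query = rng.normal(size=128)               # hypothetical probe embedding

# Offline step: organize the gallery into clusters (the "preparatory" phase).
n_clusters = 50
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(gallery)

# Online step: search only inside the cluster nearest to the query,
# instead of scanning the whole gallery.
nearest_cluster = km.predict(query.reshape(1, -1))[0]
members = np.where(km.labels_ == nearest_cluster)[0]
dists = np.linalg.norm(gallery[members] - query, axis=1)   # Euclidean distance
best_match = members[np.argmin(dists)]
print(f"searched {len(members)} of {len(gallery)} images; best match index: {best_match}")
```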
|
22 |
Multi-agent route planning for uncrewed aircraft systems operating in U-space airspace
Ayoub, Yohan, January 2023 (has links)
Society today shows fast-paced development of, and demand for, artificial intelligence systems and robotics. To move one step closer to having Unmanned Aerial Vehicles (UAVs) operating in cities, the European Union Aviation Safety Agency launched a project introducing U-space airspace, an airspace in which UAVs are allowed to operate, for instance for commercial services. The problems defined for U-space airspace resemble those studied in multi-agent path finding (MAPF), such as scaling and traffic, which motivates investigating whether MAPF solutions can be applied to U-space scenarios. This thesis extends the state-of-the-art MAPF algorithm Continuous-time Conflict-Based Search (CCBS) to handle simplified U-space scenarios, and likewise extends other A*-based algorithms: a version of Receding Horizon Lattice-based Motion Planning, named the Extended Multi-agent A* algorithm with Wait-Time (EMAWT), and an extended A*, named the Extended Multi-agent A* algorithm (EMA). Comparing the three algorithms showed that EMAWT was the most reliable and stable solution across all tests, whilst for fewer agents CCBS was clearly the best solution.
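The thesis code is not reproduced here; the snippet below is only a minimal sketch, under assumed data structures, of the kind of building block such planners extend: a time-expanded A* search on a grid in which an agent may also wait in place, with space-time cells held by other agents treated as blocked. The grid, the `reserved` table, and the unit-cost model are illustrative assumptions, not the thesis's algorithms.

```python
# Minimal sketch: time-expanded A* with a "wait" action on a 4-connected grid.
# `reserved` maps (cell, timestep) -> True for space-time cells held by other agents.
import heapq

def astar_with_wait(grid, start, goal, reserved, max_t=200):
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    # A state is (cell, timestep); moves include staying in place (the wait action).
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, t, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path                      # list of cells, one per timestep
        if (cell, t) in seen or t >= max_t:
            continue
        seen.add((cell, t))
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1 or reserved.get((nxt, t + 1)):
                continue
            heapq.heappush(open_heap, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]     # 1 = static obstacle
print(astar_with_wait(grid, (0, 0), (2, 0), {((0, 1), 1): True}))
```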
|
23 |
A data structure for spanning tree optimization problems / Uma estrutura de dados para problemas de otimização de árvores geradoras
Barbosa, Marco Aurélio Lopes, 17 June 2019 (has links)
Spanning tree optimization problems are related to many practical applications. Several of these problems are NP-hard, which limits the utility of exact methods and can require alternative approaches, such as metaheuristics. A common issue for many metaheuristics is the data structure used to represent and manipulate the solutions. A data structure with efficient operations can expand the usefulness of a method by allowing larger instances to be solved in a reasonable amount of time. We propose the 2LETT data structure and use it to represent spanning trees in two metaheuristics: mutation-based evolutionary algorithms and local search algorithms. The main operation of 2LETT is exchanging one edge of the represented tree for another, which takes O(√n) time, where n is the number of vertices in the tree. We conducted qualitative and quantitative evaluations of 2LETT and other structures from the literature. For the main operation of edge exchange in evolutionary algorithms, the computational experiments show that 2LETT has the best performance for trees with more than 10,000 vertices. For local search algorithms, 2LETT is the best option for dealing with large trees with large diameters.
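The 2LETT structure itself is not reproduced here; as a point of reference, the sketch below shows the edge-exchange operation on a spanning tree in its naive form, using an adjacency-list representation and a BFS that costs O(n) per exchange. This is the operation the abstract reports 2LETT performing in O(√n) time; the representation and helper names are assumptions made for illustration.

```python
# Naive edge exchange on a spanning tree: O(n) per operation.
# (2LETT, per the abstract, supports the same operation in O(sqrt(n)) time.)
from collections import defaultdict, deque

def exchange_edge(edges, remove, add):
    """Return the new edge set after swapping `remove` for `add`, if it stays a tree."""
    u, v = remove
    rest = {e for e in edges if e != remove and e != (v, u)}
    adj = defaultdict(list)
    for a, b in rest:
        adj[a].append(b)
        adj[b].append(a)
    # BFS from u over the remaining edges: the reachable set is u's component.
    comp, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in comp:
                comp.add(y)
                queue.append(y)
    a, b = add
    # The new edge must reconnect the two components to yield a spanning tree again.
    if (a in comp) == (b in comp):
        raise ValueError("replacement edge does not reconnect the tree")
    return rest | {add}

tree = {(0, 1), (1, 2), (2, 3), (1, 4)}
print(exchange_edge(tree, remove=(1, 2), add=(4, 3)))
```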
|
24 |
Conexidade fuzzy relativa em grafos dirigidos e sua aplicação em um método híbrido para segmentação interativa de imagens / Relative fuzzy connectedness on directed graphs and its application in a hybrid method for interactive image segmentation
Ccacyahuillca Bejar, Hans Harley, 08 December 2015 (has links)
Image segmentation consists of dividing an image into its constituent regions or objects, for example to isolate the pixels of a target object of a given application. In segmentation of medical images, the object of interest commonly presents border transitions that are predominantly bright-to-dark or dark-to-bright. Traditional region-based segmentation methods, such as Relative Fuzzy Connectedness (RFC), do not distinguish well between such similar boundaries with opposite orientations. Specifying the boundary polarity can help to alleviate this problem, but it requires a mathematical formulation on directed graphs. This work discusses how to incorporate this property into the RFC framework. A theoretical proof of the optimality of the new algorithm, called Oriented Relative Fuzzy Connectedness (ORFC), in terms of an energy function on directed graphs subject to seed constraints, is presented, together with its application in powerful hybrid segmentation methods. The proposed hybrid method, ORFC&Graph Cut, preserves the robustness of ORFC with respect to the choice of seeds, avoiding the shrinking bias of Graph Cut (GC), while keeping GC's strong control over the delineation of irregular image boundaries. The proposed methods are evaluated on magnetic resonance (MR) and computed tomography (CT) medical images of the human brain and from thoracic studies.
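The ORFC formulation itself is not reproduced here; the sketch below only illustrates, under assumed definitions, the ingredient that requires directed graphs: arc weights that depend on the sign of the intensity transition, so that a bright-to-dark crossing is weighted differently from a dark-to-bright one. The weight function and the parameter `alpha` are illustrative assumptions, not the thesis's actual affinity.

```python
# Illustrative sketch: boundary-polarity-aware arc weights on a directed image graph.
# w(p -> q) != w(q -> p) when the intensity transition has a preferred orientation.
import numpy as np

def directed_arc_weights(img, alpha=0.5):
    """Build weights for the 4-neighbor arcs of a 2D grayscale image (assumed model)."""
    h, w = img.shape
    weights = {}
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    diff = float(img[rr, cc]) - float(img[r, c])
                    # Penalize dark-to-bright arcs less than bright-to-dark ones
                    # (or vice versa), encoding the chosen boundary polarity.
                    scale = (1.0 - alpha) if diff > 0 else (1.0 + alpha)
                    weights[((r, c), (rr, cc))] = scale * abs(diff)
    return weights

img = np.array([[10, 10, 200], [10, 10, 200]], dtype=np.uint8)
w = directed_arc_weights(img)
print(w[((0, 1), (0, 2))], w[((0, 2), (0, 1))])  # an asymmetric pair of arc weights
```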
|
25 |
Otimização de funções contínuas usando algoritmos quânticos / Quantum continuous function optimization algorithms
Lara, Pedro Carlos da Silva, 22 April 2015 (has links)
Previous issue date: 2015-04-22 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / Optimization algorithms are known to have a wide range of applications in various areas of knowledge. Thus, any improvement in the performance of optimization algorithms generate great impact in solving various problems. Thus, this work indroduces the area of quantum algorithms for global optimization (maximization/minimization) of continuous functions through different quantum search methods and classical local optimization algorithms. In this case, the use of search quantum algorithms is tied directly to performance with respect to the classical method: using a quantum computer can find an element in an unsorted database using only $O(\sqrt{N})$ queries. / Algoritmos de otimização são conhecidos por apresentarem uma vasta gama de aplicações em diversas áreas do conhecimento. Desta forma, qualquer melhoria no desempenho dos algoritmos de otimização gera grande impacto na resolução de diversos problemas. Neste sentido, este trabalho introduz a área de algoritmos quânticos para a otimização global (maximização/minimização) de funções contínuas através de diferentes métodos quânticos de busca e algoritmos clássicos de otimização local. Neste caso, a utilização de algoritmos quânticos de busca está diretamente associada ao desempenho com relação ao método clássico: usando um computador quântico pode-se encontrar um elemento em um banco de dados não-ordenado usando apenas $O(\sqrt{N})$ consultas.
|
26 |
Supporting device-to-device search and sharing of hyper-localized data
Michel, Jonas Reinhardt, 08 September 2015
Supporting emerging mobile applications in densely populated environments requires connecting mobile users and their devices with the surrounding digital landscape. Specifically, the volume of digitally-available data in such computing spaces presents an imminent need for expressive mechanisms that enable humans and applications to share and search for relevant information within their digitally accessible physical surroundings. Device-to-device communications will play a critical role in facilitating transparent access to proximate digital resources. A wide variety of approaches exist that support device-to-device dissemination and query-driven data access. Very few, however, capitalize on the contextual history of the shared data itself to distribute additional data or to guide queries. This dissertation presents Gander, an application substrate and mobile middleware designed to ease the burden associated with creating applications that require support for sharing and searching of hyper-localized data in situ. Gander employs a novel trajectory-driven model of spatiotemporal provenance that enriches shared data with its contextual history: annotations that capture data's geospatial and causal history across a lifetime of device-to-device propagation. We demonstrate the value of spatiotemporal data provenance both as a tool for improving ad hoc routing performance and as a driver of complex application behavior. This dissertation discusses the design and implementation of Gander's middleware model, which abstracts away tedious implementation details by enabling developers to write high-level rules that govern when, where, and how data is distributed and to execute expressive queries across proximate digital resources. We evaluate Gander within several simulated large-scale environments and one real-world deployment on the UT Austin campus. The goal of this research is to provide formal constructs, realized within a software framework, that ease the software engineering challenges encountered during the design and deployment of several applications in emerging mobile environments.
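Gander's actual middleware API is not shown in this record; the fragment below is only a hedged sketch of what a trajectory-style spatiotemporal provenance record could look like, with each device-to-device hop appending where and when the data was forwarded. The class and field names are invented for illustration and are not Gander's real interfaces.

```python
# Hedged sketch: a provenance trail appended at every device-to-device hop.
# Field and class names are illustrative, not Gander's real API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Hop:
    device_id: str
    location: Tuple[float, float]   # (latitude, longitude) of the forwarding device
    timestamp: float                # seconds since epoch

@dataclass
class SharedDatum:
    payload: bytes
    provenance: List[Hop] = field(default_factory=list)

    def forward(self, device_id: str, location: Tuple[float, float], timestamp: float):
        """Record this hop before handing the datum to a neighboring device."""
        self.provenance.append(Hop(device_id, location, timestamp))

datum = SharedDatum(payload=b"menu of the food truck at 4th & Main")
datum.forward("phone-A", (30.2849, -97.7341), 1_700_000_000.0)
datum.forward("phone-B", (30.2852, -97.7338), 1_700_000_042.0)
# A query could now be steered toward regions the data has actually traversed.
print([h.device_id for h in datum.provenance])
```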
|
27 |
Aplicação de planejamento em jogos de estratégia / Application of planning in strategy games
Luiz, Bruno Nepomuceno, 08 September 2008
With the digital game industry growing constantly and consumers becoming more demanding, designers and programmers face ever bigger challenges in creating realistic games. Accordingly, much work has gone into the intelligence of NPCs (Non-Player Characters). This research emphasizes the use of planning to control the artificial intelligence of NPCs. The work documented here was guided by the verification of two hypotheses: the viability of using, in strategy games, a planner based on the A* algorithm and on a knowledge representation model similar to the STRIPS system; and the possibility of combining this planner with a multiagent approach based on an auction system used to identify the agent best suited to fulfill a given goal. The term strategy games was chosen to avoid confusion with traditional RTS games. The planner and the multiagent system were implemented and extensively tested; the results were satisfactory and confirmed both hypotheses.
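The dissertation's planner is not reproduced here; the snippet below is a small, hedged sketch of the auction idea described in the abstract: each agent bids on a goal (here, its estimated plan cost is used as the bid) and the goal is awarded to the lowest bidder. The cost table and agent names are assumptions made up for the example.

```python
# Hedged sketch: single-item auction that awards a goal to the cheapest bidder.
# Bid = the agent's estimated cost (e.g., A* plan length) to achieve the goal.

def run_auction(goal, agents, estimate_cost):
    """Each agent submits a bid; the goal goes to the agent with the lowest bid."""
    bids = {agent: estimate_cost(agent, goal) for agent in agents}
    winner = min(bids, key=bids.get)
    return winner, bids

# Illustrative cost table standing in for real plan-cost estimates.
plan_costs = {
    ("scout", "capture_outpost"): 14,
    ("soldier", "capture_outpost"): 9,
    ("worker", "capture_outpost"): 22,
}

winner, bids = run_auction(
    "capture_outpost",
    agents=["scout", "soldier", "worker"],
    estimate_cost=lambda agent, goal: plan_costs[(agent, goal)],
)
print(f"goal awarded to {winner}; bids were {bids}")
```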
|
28 |
Automated Debugging and Bug Fixing Solutions: A Systematic Literature Review and Classification
Shafiq, Hafiz Adnan; Arshad, Zaki, January 2013 (has links)
Context: Bug fixing is the process of ensuring correct source code and is done by the developer. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of producing new bugs in the corrected program. Scope and Objectives: In this study we performed a detailed systematic literature review. The scope of the work is to identify all solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software need no human intervention, while semi-automatic solutions assist a developer in fixing a bug. We aim to gather all such solutions for fixing bugs in design artifacts, i.e., code, UML designs, algorithms and software architecture. Automated detection, isolation and localization of bugs are outside our scope. Moreover, we are only concerned with software bugs, excluding the hardware and networking domains. Methods: A detailed systematic literature review (SLR) was performed. A number of bibliographic sources were searched, including Inspec, IEEE Xplore, the ACM Digital Library, Scopus, SpringerLink and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction and synthesis were performed in depth according to the guidelines for conducting an SLR. Grounded theory was used to analyze the literature data, and kappa analysis was used to check the level of agreement between the two researchers. Results: Through the SLR we identified 46 techniques. These techniques are classified into automated and semi-automated debugging and bug fixing. The strengths and weaknesses of each are identified, along with the types of bugs each can fix and the languages in which each can be implemented. Finally, a classification is performed that yields a list of approaches, techniques, tools, frameworks, methods and systems. Alongside this classification and categorization, we separated bug fixing and debugging on the basis of the search algorithms used. Conclusion: The outcome is a catalogue of the automated and semi-automated debugging and bug fixing solutions available in the literature. The strengths/benefits and weaknesses/limitations of these solutions are identified, the types of bugs they can fix are recognized, and the programming languages in which they can be implemented are determined. Finally, a detailed classification is performed.
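The abstract mentions kappa analysis for inter-researcher agreement but reports no numbers; the sketch below simply shows how Cohen's kappa would be computed for two reviewers' include/exclude decisions, assuming it is Cohen's variant that was used. The decision lists are fabricated for illustration.

```python
# Cohen's kappa for two reviewers' include/exclude decisions (illustrative data).
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each reviewer's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[lab] / n * freq_b[lab] / n
                   for lab in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

reviewer_1 = ["include", "include", "exclude", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.3f}")
```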
|
30 |
A Comparative Study on Optimization Algorithms and its efficiency
Ahmed Sheik, Kareem, January 2022 (has links)
Background: In computer science, optimization can be defined as finding the most cost-effective or best attainable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many real-world problems are continuous, and it is not easy to find global solutions; however, advances in computer technology keep increasing the speed of computation [1]. For any optimization problem, the optimization method, an efficient numerical simulator, and a realistic depiction of the physical operations we intend to describe and optimize are all interconnected components of the optimization process [2]. Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are considered and run on the chosen algorithms, such as the Genetic Algorithm (GA), the Ant Colony Optimization (ACO) method, and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches in terms of metrics such as CPU time, optimality, accuracy, and mean best standard deviation. Methods: A mixed-method approach is used. A literature review of existing optimization algorithms is performed; in parallel, an experiment is conducted using the ten benchmark functions with the Particle Swarm Optimization (PSO) algorithm, the ACO algorithm, GA, and PIBO to measure their efficiency on the four factors of CPU time, optimality, accuracy, and mean best standard deviation. This indicates which optimization algorithms perform better. Results: The experimental findings are reported: the benchmark functions are applied to the suggested method and the other methods, the metrics of CPU time, optimality, accuracy, and mean best standard deviation are recorded, the results are tabulated, and graphs are produced from the data obtained. Analysis and Discussion: The research questions are addressed based on the results of the experiment. Conclusion: We conclude by analyzing the existing optimization methods and the algorithms' performance. PIBO performs considerably better, as shown by the optimality, best mean, standard deviation, and accuracy results, but it has a significant drawback in CPU time: its running time is much higher than the PSO algorithm's, close to GA's, and still much better than the ACO algorithm's.
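No experimental code is included in the record; the sketch below merely illustrates the kind of measurement the study describes, timing a generic optimizer (a plain random search standing in for GA/ACO/PSO/PIBO) on one classic benchmark, the sphere function, and reporting CPU time and the best value found. The function choice, evaluation budget, and stand-in optimizer are assumptions.

```python
# Illustrative benchmark harness: time an optimizer on the sphere function
# and report CPU time and best objective value (random search as a stand-in).
import time
import numpy as np

def sphere(x):
    """Classic benchmark function: global minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def random_search(f, dim, bounds=(-5.12, 5.12), evals=50_000, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_val = None, float("inf")
    for _ in range(evals):
        x = rng.uniform(bounds[0], bounds[1], size=dim)
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

start = time.process_time()                 # CPU time, matching the study's metric
_, best = random_search(sphere, dim=10)
cpu_seconds = time.process_time() - start
print(f"best sphere value: {best:.4f}  CPU time: {cpu_seconds:.2f}s")
```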
|