731

Model Agnostic Extreme Sub-pixel Visual Measurement and Optimal Characterization

January 2012 (has links)
abstract: In a properly controlled environment, such as industrial metrology, it is possible to overcome many of the constraints that limit image-based position measurement in less controlled settings by using the techniques of image registration, achieving repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude better than conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing, applied to the calibration of a laser scribe and to online tracking velocimetry from video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Applying the proper negative offset to the template function creates a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An empirical derivation of the parameters of a rudimentary Kalman filter from this procedure is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized Kalman filter whose performance and simplicity exceed "textbook" implementations. Validation shows that, in the analytic case, the method yields arbitrarily precise feature measurement; in a reasonable test case the proposed methods achieved a consistent maximum error of around 0.3% of the length of a pixel, and in practice, using pixels 700 nm in size, feature position was located to within ± 2 nm.
Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter. / Dissertation/Thesis / Measurement Results (part 1) / Measurement Results (part 2) / General Presentation / M.S. Mechanical Engineering 2012
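The model-agnostic smoothing described above can be sketched as follows. This is an illustrative reading, not the thesis's implementation: the smoothing factor of a spline is chosen by an AIC-style criterion, AIC = n·log(RSS/n) + 2k, where the number of spline coefficients k stands in for model complexity; the test signal, noise level and candidate grid are all made up.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x)          # unknown underlying model
y = truth + rng.normal(scale=0.1, size=x.size)

def aic_smooth(x, y, candidates):
    """Pick the smoothing factor s minimizing an AIC-style score."""
    best_aic, best_spline = np.inf, None
    n = len(x)
    for s in candidates:
        spl = UnivariateSpline(x, y, s=s)
        rss = spl.get_residual()       # sum of squared residuals of the fit
        k = len(spl.get_coeffs())      # complexity proxy: coefficient count
        aic = n * np.log(rss / n) + 2 * k
        if aic < best_aic:
            best_aic, best_spline = aic, spl
    return best_spline

spl = aic_smooth(x, y, candidates=[1.0, 2.0, 4.0, 8.0])
rmse_raw = np.sqrt(np.mean((y - truth) ** 2))
rmse_smooth = np.sqrt(np.mean((spl(x) - truth) ** 2))
```

No knowledge of the sine model enters the selection; the criterion trades residual against coefficient count, which is the "model-agnostic" property the abstract emphasizes.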
732

CDMA BLOCK TRANSMISSION IN SISO AND MISO CHANNELS

CESAR AUGUSTO MEDINA SOTOMAYOR 05 October 2009 (has links)
This thesis addresses block CDMA (Code Division Multiple Access) transmission in frequency selective SISO (Single Input - Single Output) channels. Both single carrier and multicarrier transmission are considered, with cyclic prefix and zero padding as guard intervals. Blind multiuser detection structures based on the linearly constrained minimum variance criterion are investigated. Stochastic gradient and recursive least squares adaptive implementations are presented and new channel estimation algorithms are proposed. The thesis also discusses block CDMA transmission in frequency selective MISO (Multiple Input - Single Output) channels, covering, as in the SISO case, single carrier and multicarrier transmission with cyclic prefix and zero padding guard intervals. Two transmission structures are proposed for this scenario; an analysis of the diversity gain of each structure is conducted and the conditions for achieving the maximum diversity gain are identified. A detector based on the minimum mean square error criterion is implemented for each structure and, for the first transmission structure, a blind detector based on the minimum variance criterion is proposed. A recursive least squares adaptive implementation is presented and new blind channel estimation algorithms are proposed.
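The role of the cyclic prefix in block transmission, central to the schemes above, can be shown in a few lines. This sketch is generic (a random channel and BPSK block, not the thesis's system): a prefix at least as long as the channel memory turns the linear channel into a circular convolution per block, so a one-tap frequency-domain equalizer recovers the symbols exactly in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 64, 4                              # block size, channel taps
h = rng.normal(size=L) + 1j * rng.normal(size=L)
x = rng.choice([-1.0, 1.0], size=N) + 0j  # one BPSK block

tx = np.concatenate([x[-L:], x])          # prepend cyclic prefix (length L)
rx = np.convolve(tx, h)[: N + L]          # linear (frequency selective) channel
y = rx[L : L + N]                         # receiver drops the prefix

H = np.fft.fft(h, N)                      # channel frequency response
x_hat = np.fft.ifft(np.fft.fft(y) / H)    # one-tap per-subcarrier equalization
```

With zero padding instead of a cyclic prefix, the receiver can rebuild the same circulant structure by overlap-adding the tail, which is why both guard intervals appear side by side in the thesis.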
733

Interactive out-of-core rendering and filtering of one billion stars measured by the ESA Gaia mission

Alsegård, Adam January 2018 (has links)
The purpose of this thesis was to visualize the 1.7 billion stars released by the European Space Agency as the second data release (DR2) of its Gaia mission, in the open source software OpenSpace, at interactive framerates and with real-time filtering of the data. An additional implementation goal was to streamline the data pipeline so that astronomers could use OpenSpace as a visualization tool in their research. An out-of-core rendering technique has been implemented where the data is streamed from disk during runtime. Before it can be streamed, the data has to be read, sorted into an octree structure and stored as binary files in a preprocessing step. The results of this report show that the entire DR2 dataset can be read from multiple files in a folder and stored as binary values in about seven hours. This step determines which values the user will be able to filter by, and only has to be done once for a specific dataset. An octree can then be created in about 5 to 60 minutes, during which the user can define whether the stars should be filtered by any of the previously stored values. Only values used in the rendering are stored in the octree. If the created octree fits in the computer's working memory, the entire octree is loaded asynchronously on start-up; otherwise only a binary file describing the structure of the octree is read during start-up, while the actual star data is streamed from disk during runtime. Once loaded, the data is streamed to the GPU. Only visible stars are uploaded, and the application keeps track of which nodes have already been uploaded in order to eliminate redundant updates. The inner nodes of the octree store the brightest stars among all their descendants as a level-of-detail cache that can be used when the nodes are small enough in screen space. The previous star rendering in OpenSpace has been improved by dividing the rendering phase into two passes: the first pass renders into a framebuffer object, and the second performs tone mapping of the values. The rendering can be done either with billboard instancing or point splatting, the latter generally being faster. The user can also switch between VBOs and SSBOs when updating the buffers; the latter is faster but requires OpenGL 4.3, which Apple products do not currently support. The rendering runs at interactive framerates on both flat and curved screens, such as domes/planetariums. The user can also switch datasets during rendering, as well as render technique, buffer objects, color settings and many other properties. It is also possible to turn time on and see the stars move with their calculated space velocity, or transverse velocity if a star lacks radial velocity measurements; the calculations omit gravitational rotation. The purpose of the thesis has been fulfilled, as it is possible to fly through the entire DR2 dataset on a moderate desktop computer and filter the data in real-time. However, the main contribution of the project may be that the groundwork has been laid in OpenSpace for astronomers to actually use it as a tool for visualizing their own datasets and for exploring the coming Gaia releases.
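The level-of-detail idea described above (inner nodes caching their subtree's brightest stars) can be sketched with a minimal octree. This is an assumed structure for illustration, not the OpenSpace implementation; the capacity, coordinates and star catalog are invented.

```python
import numpy as np

MAX_STARS = 4    # leaf capacity and LOD cache size (illustrative)

class Node:
    """Octree node: leaves store stars; inner nodes cache the brightest."""
    def __init__(self, center, half):
        self.center, self.half = center, half
        self.stars = []        # leaf storage, or LOD cache for inner nodes
        self.children = None

    def insert(self, pos, brightness):
        if self.children is None:
            self.stars.append((pos, brightness))
            if len(self.stars) > MAX_STARS:
                self._split()
        else:
            # keep the cache equal to the brightest stars of the subtree
            self.stars = sorted(self.stars + [(pos, brightness)],
                                key=lambda s: -s[1])[:MAX_STARS]
            self._child(pos).insert(pos, brightness)

    def _split(self):
        overflow, self.stars = self.stars, []
        self.children = {}
        for pos, b in overflow:
            self._child(pos).insert(pos, b)
        self.stars = sorted(overflow, key=lambda s: -s[1])[:MAX_STARS]

    def _child(self, pos):
        octant = tuple(pos[i] > self.center[i] for i in range(3))
        if octant not in self.children:
            offset = np.where(octant, self.half / 2, -self.half / 2)
            self.children[octant] = Node(self.center + offset, self.half / 2)
        return self.children[octant]

rng = np.random.default_rng(2)
root = Node(np.array([0.5, 0.5, 0.5]), 0.5)
catalog = [(rng.random(3), float(b)) for b in rng.random(20)]
for pos, b in catalog:
    root.insert(pos, b)
```

When a node projects to only a few pixels, the renderer can draw `node.stars` instead of descending further, which is exactly the cache the thesis uses for distant octree branches.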
734

Cybersecurity of Industrial Systems: Applicative Filtering and Generation of Attack Scenarios

Puys, Maxime 05 February 2018 (has links)
Industrial systems, also called SCADA (Supervisory Control And Data Acquisition) systems, have been targeted by cyberattacks since Stuxnet in 2010. Due to the criticality of their interaction with the real world, these systems can be really harmful to humans and the environment. As industrial systems have historically been physically isolated from the rest of the world, protection efforts focused on outages and human mistakes (what is called safety). Cybersecurity differs from safety in that an adversary actively seeks to harm the system and grows in capability over time. One of the difficulties in the cybersecurity of industrial systems is to make security properties coexist with domain-specific constraints. We tackle this question along three main axes. First, we propose a filter dedicated to industrial communications, allowing applicative properties to be enforced. Then, we focus on formal verification of cryptographic protocols applied to industrial protocols such as MODBUS or OPC-UA. Using well-known tools from the domain, we model the protocols in order to check whether they provide security properties including confidentiality, authentication and integrity. Finally, we propose an approach named ASPICS (Applicative Attack Scenarios Production for Industrial Control Systems) to study whether safety properties (similar to those verified by our filter) can actually be jeopardized by attackers, depending on their position and capacity. We implement this approach in the UPPAAL model-checker and study its results on a proof-of-concept example.
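The applicative filtering idea can be sketched as a policy check on MODBUS-style write requests. The rule format and register map below are hypothetical, invented for illustration (the thesis defines its own property language); only the function code 6, "Write Single Register", is taken from the real MODBUS specification.

```python
# Hypothetical process-level policy: register -> (min_value, max_value).
ALLOWED_WRITES = {
    100: (0, 120),   # e.g. a temperature setpoint, degrees C (made up)
    101: (0, 1),     # e.g. a pump on/off flag (made up)
}

def filter_request(function_code, register, value):
    """Return True if a write request respects the applicative policy."""
    if function_code != 6:               # only single-register writes pass
        return False
    if register not in ALLOWED_WRITES:   # unknown registers are denied
        return False
    lo, hi = ALLOWED_WRITES[register]
    return lo <= value <= hi             # value must stay in the safe range

ok = filter_request(6, 100, 80)          # in-range setpoint write
blocked = filter_request(6, 100, 500)    # out-of-range value, rejected
```

The point of such a filter, as opposed to a network firewall, is that it reasons about process semantics (which value is safe for which register), not just about addresses and ports.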
735

COMPARISON OF THE CLEAN AND CFAR TECHNIQUES FOR THE ANALYSIS OF THE DISPERSION PARAMETERS OF THE MOBILE RADIO CHANNEL IN THE 2.5 GHZ BAND

ISAAC NEWTON FERREIRA SANTA RITA 18 October 2018 (has links)
This work presents the results of measurements and the analysis of the response of a wideband channel in the 2.5 GHz band in an urban environment, using the multicarrier sounding technique. The power delay profiles (PDPs) of the channel were obtained from data measured in the Gávea neighborhood of Rio de Janeiro, using two power delay profile cleaning techniques. The cleaning techniques are presented and their results are compared for a transmitted signal of 20 MHz bandwidth. The RMS (root mean square) delay spreads are computed from the filtered PDPs and from the original ones. The results are compared for several receiver positions and the mean square error of each cleaning technique is evaluated.
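The RMS delay spread extracted from each cleaned PDP follows the standard second-moment definition, which can be sketched directly (the two-tap profile below is made-up data, not a measurement from this work):

```python
import numpy as np

def rms_delay_spread(delays, powers):
    """RMS delay spread of a power delay profile (standard definition)."""
    p = np.asarray(powers, dtype=float)
    t = np.asarray(delays, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)           # power-weighted mean
    second_moment = np.sum(p * t**2) / np.sum(p)
    return np.sqrt(second_moment - mean_delay**2)

# Two equal-power taps 1 microsecond apart -> spread of 0.5 microseconds.
spread = rms_delay_spread([0.0e-6, 1.0e-6], [1.0, 1.0])
```

Because the spread is a power-weighted moment, residual noise taps left by an imperfect cleaning technique bias it upward, which is why the choice between CLEAN and CFAR matters for the dispersion estimates.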
736

A DNA computing methodology

Isaia Filho, Eduardo January 2004 (has links)
DNA computing is a field of bioinformatics that seeks the solution of problems through the manipulation of DNA sequences. In 1994, the mathematician Leonard Adleman, using biological operations and DNA sequence manipulation, solved an instance of a problem considered intractable by conventional computation, thus establishing the beginning of DNA computing. Since then, a series of combinatorial problems has been solved with this programming model. This work studies DNA computing, aiming to present some basic guidelines for those interested in programming in this environment. Advantages and disadvantages of DNA computing are contrasted and some of the programming methods found in the literature are presented. Amongst the studied methods, the filtering method appears to be the most promising, and for this reason it was chosen as the basis of a programming methodology. To illustrate the sequential filtering method, some examples of problems solved by this method are shown.
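Sequential filtering can be simulated in silico in the spirit of Adleman's 1994 Hamiltonian-path experiment: generate a large pool of candidate "strands" (vertex sequences), then discard candidates in successive filtering steps until only solutions remain. The tiny directed graph below is invented for illustration, and each list comprehension stands in for one wet-lab separation step.

```python
from itertools import product

edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}   # made-up directed graph
n = 4

# Step 1: random ligation -> every vertex sequence of length n is a candidate.
candidates = list(product(range(n), repeat=n))

# Step 2: keep only sequences whose consecutive vertices follow edges.
candidates = [p for p in candidates
              if all((p[i], p[i + 1]) in edges for i in range(n - 1))]

# Step 3: keep walks that start at vertex 0 and end at vertex n-1.
candidates = [p for p in candidates if p[0] == 0 and p[-1] == n - 1]

# Step 4: keep walks visiting every vertex exactly once (Hamiltonian paths).
paths = [p for p in candidates if len(set(p)) == n]
```

The exponential cost of Step 1 is paid in molecules rather than time in the biological setting, which is precisely the appeal (and the scaling limit) of the model discussed in the thesis.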
737

Definition of technologies for the dewatering of rich iron ore ultrafines: an application at Vale Carajás - Pará - Brazil

Orsine, Noeber Maciel January 2014 (has links)
The iron ore produced in the Carajás mining complex has mineralogical characteristics that give it a very high Fe content in all of its size fractions. Thus, at the end of the production chain, both the coarser commercial products and the fines contain about 62% Fe. Importantly, processing at Carajás consists only of comminution and classification by size; there are no concentration steps, and the products are differentiated by their respective size distribution curves: the "lump" - coarser (> 13 mm), the "sinter feed" (< 13 mm and > 0.5 mm) and the "pellet feed" (< 0.15 mm). Ensuring the proper particle size distribution of the raw material is a market requirement. The last classification step is performed with hydrocyclones and generates an ultrafine overflow of very high specific surface area, greater than 6,500 Blaine, with 45% to 95% < 7 μm, and still with a high Fe content of about 62%. The general objective of this research was therefore to find a solution for recovering and selling these tailings. Tests were carried out with technologies capable of dewatering the generated ultrafines to 9.00% moisture, the value that allows these tailings to be moved and handled and permits their incorporation into the blending of coarser products. Experiments were conducted in several external laboratories and at pilot scale at the Carajás plant, with samples of the two ultrafine tailings from the two hydrocycloning stages: the natural and the ground. It became evident that the equipment must efficiently combine two factors essential to dewatering these ultrafines: very high pressures and high operating temperatures. The productivity achieved was around 50 t/h x m² for the sinter feed hydrocycloning tailings and 40 t/h x m² for the grinding hydrocyclone tailings.
738

EFFICIENT LEARNING-BASED RECOMMENDATION ALGORITHMS FOR TOP-N TASKS AND TOP-N WORKERS IN LARGE-SCALE CROWDSOURCING SYSTEMS

Safran, Mejdl Sultan 01 May 2018 (has links)
A pressing need for efficient personalized recommendations has emerged in crowdsourcing systems. On the one hand, workers confront a flood of tasks and often spend too much time finding tasks that match their skills and interests; they therefore want effective recommendation of the most suitable tasks with regard to their skills and preferences. On the other hand, requesters sometimes receive low-quality results because a less qualified worker may start working on a task before a better-skilled worker can get to it; they therefore want reliable recommendation of the best workers for their tasks in terms of workers' qualifications and accountability. The task and worker recommendation problems in crowdsourcing systems exhibit unique characteristics that are not present in traditional recommendation scenarios: the huge flow of tasks with short lifespans, the importance of workers' capabilities, and the quality of the completed tasks. These features make traditional recommendation approaches (mostly developed for e-commerce markets) unsatisfactory for task and worker recommendation in crowdsourcing systems. In this research, we reveal our insight into the essential difference between tasks in crowdsourcing systems and products/items in e-commerce markets, and between buyers' interest in products/items and workers' interest in tasks. This insight inspires us to introduce categories as a key mediation mechanism between workers and tasks. We propose a two-tier data representation scheme (defining a worker-category suitability score and a worker-task attractiveness score) to support personalized task and worker recommendation. We also extend two optimization methods, namely least mean square error (LMS) and Bayesian personalized rank (BPR), to better fit the characteristics of task/worker recommendation in crowdsourcing systems. We then integrate the proposed representation scheme and the extended optimization methods with two adapted popular learning models, matrix factorization and kNN, resulting in two lines of top-N recommendation algorithms for crowdsourcing systems: (1) Top-N-Tasks (TNT) recommendation algorithms for discovering the top-N most suitable tasks for a given worker, and (2) Top-N-Workers (TNW) recommendation algorithms for identifying the top-N best workers for a task requester. An extensive experimental study validates the effectiveness and efficiency of a broad spectrum of algorithms, accompanied by our analysis and the insights gained.
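The two-tier scheme can be sketched as a scoring rule for top-N task recommendation. The multiplicative combination below is an assumption for illustration: the thesis defines the two scores, but this particular blend, along with all matrix values and dimensions, is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_workers, n_categories, n_tasks = 5, 3, 12

# Tier 1: how well a worker suits each task category.
suitability = rng.random((n_workers, n_categories))
# Tier 2: how attractive each individual task is to the worker.
attractiveness = rng.random((n_workers, n_tasks))
# Each task belongs to one category (categories mediate workers and tasks).
task_category = rng.integers(0, n_categories, n_tasks)

def top_n_tasks(worker, N):
    """Rank tasks for a worker by suitability-times-attractiveness."""
    scores = suitability[worker, task_category] * attractiveness[worker]
    return np.argsort(-scores)[:N]

recs = top_n_tasks(worker=0, N=4)
```

In the learned setting, `suitability` and `attractiveness` would come from the extended LMS or BPR objectives rather than being given, but the category tier keeps recommendations meaningful for new tasks with short lifespans.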
739

Non Uniform Sampling: Application to filtering and ADC/DAC conversions (Analog-to-Digital and Digital-to-Analog) in satellite telecommunications

Vernhes, Jean-Adrien 25 January 2016 (has links)
The theory of uniform sampling, developed among others by C. Shannon, is the foundation of today's digital signal processing. Since then, numerous works have been dedicated to non-uniform sampling schemes. On the one hand, these schemes model the imperfections of uniform sampling devices. On the other hand, sampling can be intentionally performed in a non-uniform way to benefit from specific properties, in particular relaxed conditions on the choice of the mean sampling frequency. Most of these works have focused on theoretical approaches, adopting simplified models for signals and sampling devices. However, in many application domains, such as satellite communications, analog-to-digital conversion is subject to strong constraints over the involved bandwidths due to the very high frequencies used. These operational conditions accentuate the imperfections of the electronic devices performing the sampling and require the choice of particular signal models and sampling schemes. This thesis aims at proposing sampling models suitable for this context. These models apply to random band-pass signals, the classical model for telecommunication signals. They must take into account technological and economic factors as well as on-board complexity constraints, and allow the integration of functionalities useful for telecommunication applications. The first contribution of this thesis is the development of non-uniform sampling formulas that integrate, in the digital domain, functionalities that are difficult to implement in the analog domain at the considered frequencies. The second contribution consists in characterizing and compensating the synchronization errors of particular non-uniform sampling devices, namely time-interleaved analog-to-digital converters, through supervised or blind methods.
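The supervised flavor of the timing-error estimation can be sketched for one channel of a two-channel time-interleaved ADC. Everything here is an illustrative assumption (a known sine as training signal, the skew value, and a least-squares estimator), not the thesis's actual method: the channel's samples are matched against the known reference to recover its timing offset.

```python
import numpy as np
from scipy.optimize import minimize_scalar

f0, fs = 3.0, 100.0                  # known tone and per-channel sample rate
true_skew = 7e-4                     # channel-B timing error, in seconds
n = np.arange(64)
t_b = n / fs + 1.0 / (2 * fs)        # nominal channel-B sampling instants
samples_b = np.sin(2 * np.pi * f0 * (t_b + true_skew))  # skewed acquisition

def residual(skew):
    """Least-squares mismatch between observed samples and the model."""
    model = np.sin(2 * np.pi * f0 * (t_b + skew))
    return np.sum((samples_b - model) ** 2)

est = minimize_scalar(residual, bounds=(-5e-3, 5e-3),
                      method="bounded", options={"xatol": 1e-9}).x
```

Once estimated, the skew can be compensated digitally, e.g. by fractional-delay filtering of that channel, which is what makes the correction attractive at frequencies where analog trimming is impractical. A blind method would have to achieve the same without knowing the input signal.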
740

Color wideline detector and local width estimation

Jorge, Vitor Augusto Machado January 2012 (has links)
Line detection algorithms are used in many application fields, such as computer vision and automation, as a basis for more complex analyses. For instance, line information can be used as input to object detection algorithms or even attitude estimation in flying robots. One way to detect lines is to use an isotropic nonlinear filtering procedure called the Wide Line Detector (WLD). This algorithm is effective at highlighting line pixels in gray-scale images, separating dark or bright lines. However, line detection algorithms are not normally concerned with the pixel-wise estimation of line width; if available, such information could be further exploited by computer vision algorithms. Furthermore, color is extensively used in computer vision as an object discriminant, but not by the WLD. In this work, we propose the extension of the WLD to color images. We also develop a new monotonically increasing kernel that is more efficient and more robust at detecting lines than the monotonically decreasing kernels used by the WLD. Finally, we devise a way to obtain the line width locally from the density estimate given by the similarity between pixels, using only density information and no border or center-line information, reverting the process used by the WLD to determine which kernel should be used. We perform several experiments with the proposed method, considering different parameters, and compare it to the traditional WLD algorithm to assess its effectiveness.
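The WLD idea can be sketched at a single scale for grayscale images. This is a simplified reading with hard thresholds and illustrative parameter values, not the paper's exact kernel: a pixel lies on a dark line when few pixels in its circular neighborhood share its brightness (low "line mass"), and that mass is also the local density from which a width estimate can be derived.

```python
import numpy as np

def wide_line_detector(img, radius=4, t=20, g_frac=0.5):
    """Flag pixels whose similar-brightness neighborhood mass is small."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = ys**2 + xs**2 <= radius**2          # circular neighborhood mask
    area = disk.sum()
    out = np.zeros_like(img, dtype=bool)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            similar = (np.abs(patch.astype(int) - int(img[y, x])) <= t) & disk
            mass = similar.sum()               # local density of similar pixels
            out[y, x] = mass < g_frac * area   # narrow structure -> line pixel
    return out

# Synthetic image: bright background with a dark vertical line of width 3.
img = np.full((32, 32), 200, dtype=np.uint8)
img[:, 15:18] = 50
mask = wide_line_detector(img)
```

For a line pixel, `mass` grows roughly with the line width times the disk diameter, which is the density-to-width relationship the thesis inverts to estimate the local line width without locating borders or center lines.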
