21

Multi-objective ROC learning for classification

Clark, Andrew Robert James January 2011 (has links)
Receiver operating characteristic (ROC) curves are widely used for evaluating classifier performance, having been applied to e.g. signal detection, medical diagnostics and safety critical systems. They allow examination of the trade-offs between true and false positive rates as misclassification costs are varied. Examination of the resulting graphs and calculation of the area under the ROC curve (AUC) allows assessment of how well a classifier is able to separate two classes and allows selection of an operating point with full knowledge of the available trade-offs.

In this thesis a multi-objective evolutionary algorithm (MOEA) is used to find classifiers whose ROC graph locations are Pareto optimal. The Relevance Vector Machine (RVM) is a state-of-the-art classifier that produces sparse Bayesian models, but is unfortunately prone to overfitting. Using the MOEA, hyper-parameters for RVM classifiers are set, optimising them not only in terms of true and false positive rates but also a novel measure of RVM complexity, thus encouraging sparseness, and producing approximations to the Pareto front. Several methods for regularising the RVM during the MOEA training process are examined and their performance evaluated on a number of benchmark datasets, demonstrating they possess the capability to avoid overfitting whilst producing performance equivalent to that of the maximum likelihood trained RVM.

A common task in bioinformatics is to identify genes associated with various genetic conditions by finding those genes useful for classifying a condition against a baseline. Typically, datasets contain large numbers of gene expressions measured in relatively few subjects. As a result of the high dimensionality and sparsity of examples, it can be very easy to find classifiers with near perfect training accuracies but which have poor generalisation capability. Additionally, depending on the condition and treatment involved, evaluation over a range of costs will often be desirable. An MOEA is used to identify genes for classification by simultaneously maximising the area under the ROC curve whilst minimising model complexity. This method is illustrated on a number of well-studied datasets and applied to a recent bioinformatics database resulting from the current InChianti population study.

Many classifiers produce “hard”, non-probabilistic classifications and are trained to find a single set of parameters, whose values are inevitably uncertain due to limited available training data. In a Bayesian framework it is possible to ameliorate the effects of this parameter uncertainty by averaging over classifiers weighted by their posterior probability. Unfortunately, the required posterior probability is not readily computed for hard classifiers. In this thesis an Approximate Bayesian Computation Markov Chain Monte Carlo algorithm is used to sample model parameters for a hard classifier using the AUC as a measure of performance. The ability to produce ROC curves close to the Bayes optimal ROC curve is demonstrated on a synthetic dataset. Due to the large numbers of sampled parametrisations, averaging over them when rapid classification is needed may be impractical and thus methods for producing sparse weightings are investigated.
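The ROC and AUC machinery the abstract relies on can be sketched concretely. The following is a minimal illustration, not the thesis's implementation: it sweeps a decision threshold over classifier scores to trace the ROC curve and integrates it with the trapezoidal rule (for simplicity it assumes both classes are present and ignores tied scores).

```python
def roc_curve(labels, scores):
    """Trace ROC points (FPR, TPR) by sweeping a threshold over scores.

    labels: 0/1 ground truth; scores: classifier outputs, higher means
    more likely positive. Assumes at least one example of each class.
    """
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, y in pairs:          # lower the threshold one score at a time
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

Selecting an operating point then amounts to picking the ROC point whose cost-weighted combination of rates is best for the application.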
22

Compression progressive de maillages surfaciques texturés / Progressive compression of surface textured meshes

Caillaud, Florian 17 January 2017 (has links)
In recent years, 3D models have become increasingly detailed, which substantially increases the amount of data needed to describe them. At the same time, a growing number of applications are constrained in memory and/or speed (visualisation on mobile devices, video games, etc.), and these difficulties are even more pronounced in a Web context. This situation can lead to incompatibilities and to transmission or rendering latency, which is often problematic. Progressive compression of these models is one possible solution. The goal is to compress the information (geometry, connectivity and associated attributes) so that the mesh can be reconstructed progressively. In contrast to single-rate compression, progressive compression quickly provides a faithful preview of the 3D model and then refines it until the complete mesh is recovered. This improves user comfort and allows the number of elements to be displayed or processed to adapt to the capabilities of the receiving device. Existing approaches to progressive compression mainly focus on triangular 2-manifold meshes. Very few methods can progressively compress non-manifold surface meshes and, to our knowledge, none can generically handle surface meshes of any type (i.e. non-manifold and polygonal). To remove these limitations, we present a generic progressive compression method able to handle all surface meshes (non-manifold and polygonal). Moreover, our approach accounts for the texture attribute possibly associated with these meshes, correctly handling any texture seams. To this end, we progressively decimate the mesh using a new generic simplification operator. The decimation is driven by a local metric designed to preserve both the geometry and the texture parametrisation. During the simplification, we progressively encode the information needed for reconstruction. To improve the compression rate, we apply several entropy-reduction mechanisms as well as geometry-based prediction schemes for encoding the connectivity and texture coordinates. Finally, the texture image is compressed progressively and then multiplexed with the mesh data. This multiplexing is guided by a perceptual metric in order to obtain the best possible rate-distortion trade-off during decompression.
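The decimate-and-replay idea at the heart of progressive compression can be illustrated on a far simpler structure than a textured surface mesh. The sketch below is an assumption-laden toy on a 2D polyline, not the thesis's generic operator: decimation records each vertex removal, and the decoder replays the records in reverse as a refinement stream, coarse preview first.

```python
def decimate(points, keep=2):
    """Greedy decimation of a polyline: repeatedly remove the interior
    vertex whose removal perturbs the shape least (distance to the chord
    joining its neighbours), recording each removal for later replay."""
    pts = list(points)
    ops = []  # (index at removal time, vertex); last entry is coarsest
    while len(pts) > keep:
        best_i, best_err = None, float("inf")
        for i in range(1, len(pts) - 1):
            (x0, y0), (x, y), (x1, y1) = pts[i - 1], pts[i], pts[i + 1]
            # perpendicular distance from pts[i] to the neighbour chord
            err = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) \
                  / ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if err < best_err:
                best_i, best_err = i, err
        ops.append((best_i, pts.pop(best_i)))
    return pts, ops

def refine(coarse, ops):
    """Replay removals in reverse; each step yields a finer polyline,
    mimicking a progressive bitstream being decoded."""
    pts = list(coarse)
    for i, v in reversed(ops):
        pts.insert(i, v)   # exact inverse of the recorded pop
        yield list(pts)
```

Because each reinsertion inverts the corresponding removal in reverse order, the final stage reproduces the original exactly, i.e. the scheme is lossless.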
23

Out-of-Core Multi-Resolution Volume Rendering of Large Data Sets

Lundell, Fredrik January 2011 (has links)
A modality device can today capture high-resolution volumetric data sets, and as data resolutions increase, so do the challenges of processing volumetric data through a visualization pipeline. Standard volume rendering pipelines often use a graphics processing unit (GPU) to accelerate rendering by exploiting the parallel architecture of such devices. Unfortunately, graphics cards have limited amounts of video memory (VRAM), causing a bottleneck in a standard pipeline. Multi-resolution techniques can be used to modify the rendering pipeline efficiently, allowing sub-domains within the volume to be represented at different resolutions. The active resolution distribution is temporarily stored in VRAM for rendering, and the inactive parts are stored on secondary memory layers such as system RAM or disk. The active resolution set can be optimized to produce high-quality renderings while minimizing the amount of storage required. This is done using a dynamic compression scheme that optimizes visual quality by evaluating user-input data. The optimized resolution of each sub-domain is then streamed on demand to VRAM from the secondary memory layers. Rendering a multi-resolution data set requires extra care at the boundaries between sub-domains. To avoid artifacts, an intrablock interpolation (II) sampling scheme capable of creating smooth transitions between sub-domains at arbitrary resolutions can be used. The result is a highly optimized rendering pipeline, complemented by a preprocessing pipeline, together capable of rendering large volumetric data sets in real time.
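The out-of-core streaming described above boils down to keeping a bounded working set resident and evicting the least recently used bricks. A minimal sketch, where a plain dict stands in for the slower RAM/disk layers and an OrderedDict for the VRAM-resident set (an illustration, not the thesis's pipeline):

```python
from collections import OrderedDict

class BrickCache:
    """Fixed-capacity stand-in for VRAM: misses are 'streamed' from a
    backing store and the least-recently-used brick is evicted when the
    memory budget is exceeded."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # brick_id -> voxel data (RAM/disk)
        self.resident = OrderedDict()  # the VRAM working set
        self.misses = 0

    def fetch(self, brick_id):
        if brick_id in self.resident:
            self.resident.move_to_end(brick_id)   # mark recently used
            return self.resident[brick_id]
        self.misses += 1
        data = self.backing[brick_id]             # stream from slower layer
        self.resident[brick_id] = data
        if len(self.resident) > self.capacity:
            self.resident.popitem(last=False)     # evict LRU brick
        return data
```

A renderer would call `fetch` for each brick of the active resolution set every frame; bricks of inactive resolutions simply never enter the working set.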
24

Hierarchical Path Planning and Control of a Small Fixed-wing UAV: Theory and Experimental Validation

Jung, Dongwon Jung 14 November 2007 (has links)
Recently there has been a tremendous growth of research emphasizing control of unmanned aerial vehicles (UAVs), either in isolation or in teams. As a matter of fact, UAVs increasingly find their way into applications, especially in the military and law enforcement (e.g., reconnaissance, remote delivery of urgent equipment/material, resource assessment, environmental monitoring, battlefield monitoring, ordnance delivery, etc.). This trend will continue in the future, as UAVs are poised to replace the human-in-the-loop during dangerous missions. Civilian applications of UAVs are also envisioned, such as crop dusting, geological surveying, and search and rescue operations. In this thesis we propose a new online multiresolution path planning algorithm for a small UAV with limited on-board computational resources. The proposed approach assumes that the UAV has detailed information of the environment and the obstacles only in its vicinity. Information about far-away obstacles is also available, albeit less accurately. The proposed algorithm uses the fast lifting wavelet transform (FLWT) to obtain a multiresolution cell decomposition of the environment whose dimension is commensurate with the on-board computational resources. A topological graph representation of the multiresolution cell decomposition is constructed efficiently, directly from the approximation and detail wavelet coefficients. Dynamic path planning is sequentially executed for an optimal path using the A* algorithm over the resulting graph. The proposed path planning algorithm is implemented on-line on a small autopilot. Comparisons with the standard D*-lite algorithm are also presented. We also investigate the problem of generating a smooth, planar reference path from a discrete optimal path. With the optimal path represented as a sequence of square cells, we derive a smooth B-spline path that is constrained inside a channel induced by the geometry of the cells.
To this end, a constrained optimization problem is formulated by setting up geometric linear constraints as well as boundary conditions. Subsequently, we construct B-spline path templates by solving a set of distinct optimization problems. For application to UAV motion planning, the path templates replace parts of the entire path with smooth B-spline paths. Each path segment is stitched together while preserving continuity to obtain a final smooth reference path to be used for path following control. Path following control for a small fixed-wing UAV tracking the prescribed smooth reference path is also addressed. Assuming the UAV is equipped with an autopilot for low-level control, we adopt a kinematic error model with respect to the moving Serret-Frenet frame attached to the path for tracking controller design. A kinematic path following control law that commands heading rate is presented. Backstepping is applied to derive the roll angle command, taking into account the approximate closed-loop roll dynamics. A parameter adaptation technique is employed to account for the inaccurate time constant of the closed-loop roll dynamics during actual implementation. Finally, we implement the proposed hierarchical path control of a small UAV on an actual hardware platform, based on a 1/5-scale R/C model airframe (Decathlon) and the autopilot hardware and software. Using a hardware-in-the-loop (HIL) simulation environment, the proposed hierarchical path control algorithm has been validated through on-line, real-time implementation on a small micro-controller. By seamlessly integrating the control algorithms for path planning, path smoothing, and path following, it is demonstrated that a UAV equipped with a small autopilot having limited computational resources manages to accomplish the path control objective of reaching the goal while avoiding obstacles with minimal human intervention.
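The dynamic path planning step runs A* over the graph derived from the multiresolution cell decomposition. A minimal sketch of A* itself, on a uniform 4-connected occupancy grid rather than the wavelet-derived graph of the thesis:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (truthy cell = obstacle),
    using Manhattan distance as an admissible heuristic. Returns the
    list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                 # tie-breaker for the heap
    frontier = [(h(start), next(tie), start)]
    g = {start: 0}                          # best known cost-to-come
    parent = {start: None}
    while frontier:
        _, _, cur = heapq.heappop(frontier)
        if cur == goal:                     # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g[cur] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(frontier, (ng + h(nb), next(tie), nb))
    return None
```

On the multiresolution decomposition the same loop applies, with grid neighbours replaced by graph edges whose costs reflect cell sizes.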
25

Algorithm-Based Efficient Approaches for Motion Estimation Systems

Lee, Teahyung 14 November 2007 (has links)
121 pages. Directed by Dr. David V. Anderson. This research addresses algorithms for efficient motion estimation systems. With the growth of the wireless video system market (mobile imaging, digital still and video cameras, and video sensor networks), low-power consumption is increasingly desirable for embedded video systems. Motion estimation typically requires considerable computation and is a basic building block for many video applications. To implement low-power video systems using embedded devices and sensors, a CMOS imager has been developed that allows low-power computations on the focal plane. In this dissertation, efficient motion estimation algorithms are presented to complement this platform. In the first part of the dissertation we propose two algorithms for gradient-based optical flow estimation (OFE) that reduce computational complexity while maintaining high performance. The first is a checkerboard-type filtering (CBTF) algorithm for prefiltering and spatiotemporal derivative calculations. The second is a spatially recursive OFE framework using recursive least squares (RLS) and/or matrix refinement to reduce the computational complexity of solving the linear system of image-intensity derivatives in least-squares (LS) OFE. Simulation results show that CBTF and spatially recursive OFE improve computational efficiency compared to conventional approaches, with higher or similar performance. In the second part of the dissertation we propose a new algorithm for video coding that improves motion estimation and compensation performance in the wavelet domain. The new algorithm performs wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD) to enhance rate-distortion (RD) performance under temporal aliasing noise. This technique gives competitive or better RD performance compared to conventional MRME and MRME with motion vector prediction through median filtering.
26

AUTOMATED CLASSIFICATION OF POWER QUALITY DISTURBANCES USING SIGNAL PROCESSING TECHNIQUES AND NEURAL NETWORKS

Settipalli, Praveen 01 January 2007 (has links)
This thesis focuses on simulating, detecting, localizing and classifying power quality disturbances using advanced signal processing techniques and neural networks. Primarily, discrete wavelet and Fourier transforms are used for feature extraction, and classification is achieved using neural network algorithms. The proposed feature vector combines features computed using multi-resolution analysis with those from the discrete Fourier transform, exploiting the benefit of having both time- and frequency-domain information simultaneously. Two classification algorithms, based on feed-forward and adaptive resonance theory (ART) neural networks, are proposed. This thesis demonstrates that the proposed methodology achieves good computational efficiency and classification accuracy.
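The wavelet side of such a feature vector can be sketched with the Haar transform, the simplest discrete wavelet. The example below is an illustration, not the thesis's feature set; it assumes the signal length is divisible by 2^levels, and builds a compact multi-resolution energy descriptor of the kind commonly fed to disturbance classifiers.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_features(signal, levels=3):
    """Multi-resolution feature vector: detail-coefficient energy at each
    scale, plus the energy of the final approximation."""
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(sum(x * x for x in d))   # detail energy at this scale
    feats.append(sum(x * x for x in a))       # residual approximation energy
    return feats
```

A transient disturbance concentrates energy in the fine-scale detail entries, while a steady sag or swell shows up in the approximation term; appending DFT magnitudes would give the combined time-frequency vector the abstract describes.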
27

Uma Aplicação de Redes Neurais Auto-Organizáveis à Reconstrução Tridimensional de Superfícies / An Application of Self-Organizing Neural Networks to Three-Dimensional Surface Reconstruction

Brito Júnior, Agostinho de Medeiros 14 January 2005 (has links)
We propose a multi-resolution approach for surface reconstruction from clouds of unorganised points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organising map. Basically, a self-adaptive scheme iteratively moves the vertices of an initially simple mesh towards the set of points, ideally the object boundary. Successive refinement and vertex motion are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested on several point sets, including different shapes and sizes. The results show generated meshes very close to the objects' final shapes. We include error and quality measures and discuss the robustness of the algorithm.
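The Kohonen-style adaptation rule can be sketched in a few lines. The toy below fits a closed vertex loop (standing in for a mesh with neighbourhood relations) to a 2D point cloud by moving each winning vertex, and more weakly its ring neighbours, toward the samples; it omits the thesis's selective refinement operators.

```python
def som_fit(vertices, points, epochs=200, lr0=0.2):
    """Kohonen-style adaptation: for each sample, pull the closest
    ('winning') vertex and its two loop neighbours toward the sample,
    with a learning rate that decays over epochs."""
    verts = [list(v) for v in vertices]
    n = len(verts)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)       # decaying learning rate
        for px, py in points:
            # index of the vertex nearest to the sample point
            win = min(range(n),
                      key=lambda i: (verts[i][0] - px) ** 2
                                    + (verts[i][1] - py) ** 2)
            # winner moves fully, ring neighbours at half strength
            for i, w in ((win, 1.0), ((win - 1) % n, 0.5), ((win + 1) % n, 0.5)):
                verts[i][0] += lr * w * (px - verts[i][0])
                verts[i][1] += lr * w * (py - verts[i][1])
    return verts
```

In the full method, a refinement rule would also split edges of the loop (or faces of the mesh) where the fit error remains large, giving the multi-resolution behaviour.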
28

[en] MULTI-RESOLUTION VISUALIZATION OF DIGITAL ELEVATION MODELS USING GPU SHADERS / [pt] VISUALIZAÇÃO DE MODELOS DIGITAIS DE ELEVAÇÃO EM MULTIRESOLUÇÃO UTILIZANDO PROGRAMAÇÃO EM GPU

ANDREY D ALMEIDA ROCHA RODRIGUES 28 March 2018 (has links)
[en] Efficient rendering of large digital elevation models remains a challenge for real-time applications. The direct use of hardware tessellation has limited applicability for managing the level of detail of large models. Although the graphics hardware can control the resolution of patches very efficiently, the whole patch data must be loaded in memory, which compromises the scalability of naive GPU-based solutions for controlling level of detail. In this work, we propose an efficient and scalable new algorithm for rendering large digital elevation models. Our proposal effectively combines GPU tessellation with CPU tile management, taking full advantage of GPU processing capabilities while keeping graphics-memory use within practical limits. We also propose a technique to manage the level of detail of aerial imagery mapped on top of elevation models as textures. Geometry and texture level-of-detail management run independently, and tiles are combined with no need to load extra data. The proposed level-of-detail management is then extended to handle models with irregular borders and holes.
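The distance-based criterion behind such level-of-detail management can be sketched as follows; this is an illustrative assumption, not the thesis's metric. Each level's world-space geometric error is projected to screen pixels, and the coarsest level whose projected error stays under a pixel tolerance is selected.

```python
import math

def select_lod(distance, viewport_height_px, fov_vertical_rad,
               geometric_error_per_lod, max_error_px=2.0):
    """Pick the coarsest LOD whose projected geometric error stays under
    a pixel tolerance. geometric_error_per_lod lists each level's
    world-space error, finest level first."""
    # world units covered by one pixel at this distance (perspective camera)
    world_per_px = (2.0 * distance * math.tan(fov_vertical_rad / 2.0)
                    / viewport_height_px)
    for lod in range(len(geometric_error_per_lod) - 1, -1, -1):  # coarsest first
        if geometric_error_per_lod[lod] / world_per_px <= max_error_px:
            return lod
    return 0  # even the finest level exceeds the tolerance; use it anyway
```

Running this test per tile keeps nearby terrain finely tessellated and distant terrain coarse; the same criterion can be evaluated separately for texture tiles, matching the independent geometry/texture management described above.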
29

Efficient Image Processing Techniques for Enhanced Visualization of Brain Tumor Margins

Koglin, Ryan W. January 2014 (has links)
No description available.
30

Multi-Resolution Statistical Modeling in Space and Time With Application to Remote Sensing of the Environment

Johannesson, Gardar 12 May 2003 (has links)
No description available.
