  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Road Surface Modeling using Stereo Vision / Modellering av Vägyta med hjälp av Stereokamera

Lorentzon, Mattis, Andersson, Tobias January 2012
Modern cars are often equipped with a variety of sensors that collect information about the car and its surroundings. The stereo camera is an example of a sensor that, in addition to regular images, also provides distances to points in its environment. This information can, for example, be used to detect approaching obstacles and warn the driver if a collision is imminent, or even to brake the vehicle automatically. Objects that constitute a potential danger are usually located on the road in front of the vehicle, which makes the road surface a suitable reference level from which to measure object heights. This Master's thesis describes how an estimate of the road surface can be found in order to make these height measurements. The thesis describes how the large amount of data generated by the stereo camera can be reduced to a more efficient representation in the form of an elevation map. The report discusses a method for relating data from different points in time using information from the vehicle's motion sensors, and shows how this method can be used for temporal filtering of the elevation map. For estimating the road surface, two different methods are compared: one that uses a RANSAC approach to iterate towards a good surface-model fit, and one that uses conditional random fields to model the probability that different parts of the elevation map belong to the road. A way to detect curb lines, and to use them to improve the road surface estimate, is also shown. Both methods for road classification show good results, with a few differences that are discussed towards the end of the report. An example of how the road surface estimate can be used to detect obstacles is also included.
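The RANSAC step described in the abstract can be illustrated with a minimal sketch: fitting a plane z = ax + by + c to elevation-map cells while ignoring cells raised by obstacles. This is not the thesis's implementation; the tolerance and iteration count are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit a plane z = a*x + b*y + c to 3-D points with RANSAC.

    points: (N, 3) array of elevation-map cells (x, y, z).
    Returns (a, b, c) of the plane supported by the most inliers.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        try:
            model = np.linalg.solve(A[idx], z[idx])  # exact plane through the 3 samples
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample, draw again
        residuals = np.abs(A @ model - z)
        n_in = int((residuals < inlier_tol).sum())
        if n_in > best_inliers:
            best_inliers, best_model = n_in, model
    # refine with least squares on the consensus set
    mask = np.abs(A @ best_model - z) < inlier_tol
    model, *_ = np.linalg.lstsq(A[mask], z[mask], rcond=None)
    return model
```

Cells whose height exceeds the fitted plane by more than the tolerance can then be flagged as obstacle candidates, as in the obstacle-detection example the abstract mentions.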
352

Sensordatafusion av IR- och radarbilder / Sensor data fusion of IR- and radar images

Schultz, Johan January 2004
This thesis describes and evaluates a number of algorithms for multi-sensor data fusion of radar and IR/TV data at the raw-data level, that is, before attribute or object extraction. Attribute extraction can discard information that could otherwise improve the fusion; if the fusion is performed at the raw-data level, more information remains available, which could lead to better attribute extraction in a later stage. Two approaches are presented. The first method projects the radar image into the IR view and vice versa, and then fuses each pair of images that share the same dimensions. The second method fuses the two original images into a volume spanned by the three dimensions represented in the source images. This method is also extended to exploit stereo vision. The results show that exploiting stereo vision can be worthwhile: the extra information facilitates the fusion and yields a more general solution to the problem.
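As a toy illustration of pixel-level fusion (not one of the algorithms evaluated in the thesis), two co-registered images of equal size can be combined pixel by pixel, either with a fixed blending weight or by letting the sensor with the stronger local structure dominate; the gradient-based weighting here is an assumption made for the sketch.

```python
import numpy as np

def fuse_images(ir, radar, alpha=None):
    """Pixel-level fusion of two co-registered images of equal size.

    If alpha is given, blend with fixed weights alpha and (1 - alpha).
    Otherwise weight each pixel by the local gradient magnitude of its
    source image, so the sensor showing more structure dominates there.
    """
    assert ir.shape == radar.shape, "images must be co-registered and equal-sized"
    if alpha is not None:
        return alpha * ir + (1.0 - alpha) * radar

    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy) + 1e-9  # epsilon avoids division by zero

    w_ir, w_radar = grad_mag(ir), grad_mag(radar)
    return (w_ir * ir + w_radar * radar) / (w_ir + w_radar)
```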
353

3D Reconstruction Of Underwater Scenes From Uncalibrated Video Sequences

Kirli, Mustafa Yavuz 01 August 2008
The aim of this thesis is to reconstruct 3D representations of underwater scenes from uncalibrated video sequences. Underwater visualization is important for underwater Remotely Operated Vehicles, and the underwater environment is complex because of inhomogeneous light absorption and light scattering. These factors make underwater 3D reconstruction more challenging. The reconstruction consists of the following stages: image enhancement, feature detection and matching, fundamental matrix estimation, auto-calibration, recovery of extrinsic parameters, rectification, stereo matching, and triangulation. For image enhancement, a pre-processing filter is used to remove the effects of water and to enhance the images. Two feature extraction methods are examined: (1) difference of Gaussians with the SIFT feature descriptor, and (2) the Harris corner detector with the grey levels around the feature point. Matching is performed by comparing SIFT descriptors and by grey-level correlation, respectively, for the two methods. The results show that SIFT performs better than Harris with grey-level information. The RANSAC method with the normalized 8-point algorithm is used to estimate the fundamental matrix and to reject outliers. Because calibrating cameras underwater is difficult, an auto-calibration process is examined. Rectification is also performed, since it makes epipolar lines coincide with image scan lines, which helps stereo matching algorithms. The graph-cut stereo matching algorithm is used to compute the corresponding pixel for each pixel in the stereo image pair. In the last stage, triangulation is used to compute 3D points from the corresponding pixel pairs.
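The fundamental-matrix stage can be sketched with the normalized 8-point algorithm (shown here without the RANSAC loop, on noise-free correspondences; the Hartley normalization and rank-2 enforcement follow the standard textbook formulation, not the thesis code):

```python
import numpy as np

def normalize(pts):
    """Translate to the centroid and scale mean distance to sqrt(2) (Hartley)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(p1, p2):
    """Estimate the fundamental matrix from N >= 8 point correspondences."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # each correspondence gives one row of the constraint x2^T F x1 = 0
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped row-major
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    F = T2.T @ F @ T1                 # undo the normalization
    return F / np.linalg.norm(F)      # fix the arbitrary scale
```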
354

Matching And Reconstruction Of Line Features From Ultra-high Resolution Stereo Aerial Imagery

Ok, Ali Ozgun 01 September 2011
In this study, a new approach for the matching and reconstruction of line features from multispectral stereo aerial images is presented. The multispectral information available in aerial images is fully exploited throughout the pre-processing and edge-detection steps. To describe the straight line segments accurately, a principal component analysis technique is adapted. The initial correspondences between the stereo images are generated using a new pair-wise stereo matching approach that involves a total of seven relational constraints. The final line-to-line correspondences are established in a precise matching stage in which the final line matches are assigned by means of three novel measures and a final similarity voting scheme. Once the line matches are established, their stereo reconstruction is performed by an innovative approach that exploits the redundancy inherent in line pair-relations. In this way, stereo matches observed in a nearly parallel geometry with the epipolar lines can also be reconstructed accurately. The proposed approach is tested on two urban test sites with different built-up characteristics, and very successful and promising stereo line matching and reconstruction performance is achieved. Moreover, a comparison with one of the state-of-the-art stereo matching approaches demonstrates the superiority and potential of the proposed approach.
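The PCA-based description of straight line segments can be sketched as follows; this is a generic formulation assuming edge pixels have already been grouped into clusters, not the thesis's exact adaptation.

```python
import numpy as np

def fit_line_segment(pixels):
    """Describe an edge-pixel cluster as a straight segment via PCA.

    pixels: (N, 2) array of (x, y) edge coordinates.
    Returns (centroid, direction, endpoints): the principal axis of the
    point scatter gives the line direction, and projecting the extreme
    pixels onto that axis gives the segment endpoints.
    """
    pts = np.asarray(pixels, dtype=float)
    centroid = pts.mean(axis=0)
    # eigenvector of the covariance matrix with the largest eigenvalue
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    t = (pts - centroid) @ direction      # 1-D coordinates along the axis
    endpoints = centroid + np.outer([t.min(), t.max()], direction)
    return centroid, direction, endpoints
```

The ratio of the two eigenvalues also indicates how straight the cluster is, which is one natural way such a descriptor can feed a line-quality measure.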
355

Error Concealment In 3D Video

Aydogmus, Sercan 01 December 2011
Advances in multimedia technologies have increased the interest in using three-dimensional (3D) video applications on mobile devices. However, wireless transmission is significantly prone to errors: packets may be corrupted or lost, causing blocking artifacts. Furthermore, because of compression and predictive coding, errors propagate through the sequence, and salient features of the video cannot be recovered until a key frame or synchronization frame is correctly received. Without concealment and enhancement techniques, visible artifacts would inevitably and regularly appear in the decoded stream. In this thesis, error concealment techniques for full-frame losses in video-plus-depth and stereo video structures are implemented and compared. Temporal and inter-view correlations are utilized to predict the lost frames while considering memory usage and computational complexity. The concealment methods are implemented on the JM 17.2 decoder, which is based on the H.264/AVC specification [1]. The simulation results are compared with the simple frame copy (FC) method for sequences with different characteristics.
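The frame copy (FC) baseline used for comparison can be sketched in a few lines; this is an illustrative re-creation on raw frame arrays, not the JM-integrated implementation.

```python
import numpy as np

def conceal_frame_copy(frames, lost):
    """Baseline frame-copy (FC) concealment: replace each lost frame
    with the last correctly received frame of the same view.

    frames: list of 2-D arrays, with None marking a lost frame.
    lost:   set of indices of frames known to be lost.
    """
    out = []
    last_good = None
    for i, f in enumerate(frames):
        if i in lost or f is None:
            if last_good is None:
                raise ValueError("no reference frame received yet")
            out.append(last_good.copy())   # freeze the last good frame
        else:
            out.append(f)
            last_good = f
    return out
```

More elaborate schemes replace the plain copy with a temporally or inter-view predicted frame, which is where the correlations mentioned in the abstract come in.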
356

Variational image processing algorithms for the stereoscopic space-time reconstruction of water waves

Gallego Bonet, Guillermo 19 January 2011
A novel video observational method for the space-time stereoscopic reconstruction of dynamic surfaces representable as graphs, such as ocean waves, is developed. Variational optimization algorithms combining image processing, computer vision, and partial differential equations are designed to recover the shape of an object's surface from sequences of synchronized multi-view images. Several theoretical and numerical paths to the solution are discussed. The variational stereo method developed in this thesis has several advantages over existing 3-D reconstruction algorithms. It follows a top-down, object-centered approach in which an explicit model of the target object in the scene is devised and then related to image measurements. Its key advantages are the coherence (smoothness) of the reconstructed surface resulting from the object-centered design, robustness to noise due to a generative model of the observed images, the ability to handle surfaces with smooth textures where other methods typically fail to provide a solution, and the higher resolution achieved thanks to a suitable graph representation of the object's surface. The method provides competitive results with respect to existing variational reconstruction algorithms. Moreover, it is based on a simplified but complete physical model of the scene, which allows the reconstruction process to include physical properties of the object's surface that are otherwise difficult to take into account with existing algorithms. Some initial steps are taken toward incorporating the physics of ocean waves into the stereo reconstruction process. The method is applied to empirical data of ocean waves collected at an off-shore oceanographic platform located off the coast of Crimea, Ukraine. An empirically based physical model founded on current ocean engineering standards is used to validate the results.
Our findings suggest that this remote-sensing observational method can have a broad impact on off-shore engineering by enriching the understanding of sea states and enabling improved design of off-shore structures. Ways to incorporate dynamical properties, such as the wave equation, into the reconstruction process are discussed for future research.
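The object-centered idea of representing the surface as a graph z(x, y) and balancing image fidelity against smoothness can be illustrated with a much simplified energy. This toy sketch replaces the thesis's generative image model with direct height observations, and uses periodic boundaries for brevity; both are assumptions.

```python
import numpy as np

def refine_surface(obs, lam=1.0, step=0.02, n_iters=300):
    """Minimise a data + smoothness energy over a height-map graph z(x, y):

        E(z) = sum (z - obs)^2 + lam * sum |grad z|^2

    by explicit gradient descent. The gradient of the smoothness term is
    minus twice the discrete 5-point Laplacian (periodic boundaries here,
    purely for brevity of the sketch).
    """
    z = obs.copy()
    for _ in range(n_iters):
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)
        grad = 2 * (z - obs) - 2 * lam * lap
        z -= step * grad
    return z
```

Larger `lam` trades fidelity to the observations for a smoother, more coherent surface, which is the qualitative behaviour the abstract attributes to the object-centered design.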
357

A Prototype For An Interactive And Dynamic Image-Based Relief Rendering System / En prototyp för ett interaktivt och dynamiskt bildbaserat relief renderingssystem

Bakos, Niklas January 2002
In the research area of developing arbitrary and unique virtual views of a real-world scene, a prototype of an interactive relief texture mapping system capable of processing video using dynamic image-based rendering is developed in this Master's thesis. The process of deriving depth from recorded video using binocular stereopsis is presented, together with how the depth information is adjusted so that the orientation of the original scene can be manipulated. Once the scene depth is known, the recorded organic and dynamic objects can be seen from viewpoints not available in the original video.
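The depth-from-stereopsis step relies on the standard rectified-stereo relation Z = f·B/d; a minimal sketch follows (all numeric values illustrative, not from the thesis):

```python
def depth_from_disparity(d, f, B):
    """Z = f * B / d for a rectified stereo pair: focal length f (pixels),
    baseline B (metres), disparity d (pixels)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * B / d

def backproject(u, v, d, f, B, cx, cy):
    """Recover the 3-D point (X, Y, Z) seen at pixel (u, v), given the
    principal point (cx, cy) of the rectified camera."""
    Z = depth_from_disparity(d, f, B)
    return ((u - cx) * Z / f, (v - cy) * Z / f, Z)
```

Applying this per pixel turns a disparity map into the depth information that the relief textures are built from.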
358

Assessment of Grapevine Vigour Using Image Processing / Tillämpning av bildbehandlingsmetoder inom vinindustrin

Bjurström, Håkan, Svensson, Jon January 2002
This Master's thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches to shoot counting. Within canopy assessment, the emphasis is on methods for estimating canopy density. Other possible assessment areas are also discussed, such as canopy colour and the measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.
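As a hedged illustration of the canopy-density idea, the fraction of canopy pixels can be estimated with a simple green-dominance threshold; this is not the thesis's method, and the margin value is an assumption.

```python
import numpy as np

def canopy_density(rgb, margin=20):
    """Estimate canopy density as the fraction of 'green' pixels.

    A pixel is counted as canopy if its G channel exceeds both the R and
    B channels by at least `margin`. rgb: (H, W, 3) uint8 image.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    canopy = (g > r + margin) & (g > b + margin)
    return canopy.mean()
```

The same binary canopy mask could also feed gap measurement, since canopy gaps are simply the connected non-canopy regions inside the canopy extent.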
359

Localisation d'un robot mobile autonome en environnements naturels / Localization of an Autonomous Mobile Robot in Natural Environments

Mallet, Anthony 02 July 2001
This thesis addresses the problem of localizing an autonomous mobile robot in natural environments. The first part of the dissertation examines algorithmic methods that can provide a position estimate, and proposes a classification of these methods into three broad categories. The first class is termed "local". It relies on a very low level of abstraction and uses "raw" data; the robot's position is computed incrementally, by accumulating elementary displacements. Odometry belongs to this class, and the document presents an analysis of its performance. An original method for visual motion estimation is also proposed. It uses stereo vision and tracks pixels through a video image sequence to derive elementary displacements. This method overcomes some drawbacks of odometry, notably on rough terrain. The positions produced by local methods, however, are subject to unavoidable drift. It is therefore necessary to use methods of the second class, termed "global". Under certain circumstances, these can reduce the drift of the local methods (landmark-based localization methods notably belong to this category). A method based on elevation maps is presented. This representation allows position registration by minimizing a distance between a local 3D image and the model. Moreover, thanks to a particular construction technique, memorizing the robot's trajectory makes it possible to back-propagate corrections to certain positions and thus guarantee (better) spatial consistency of the model. The last category, which groups methods of localization with respect to an initial model, or "absolute" methods, is not addressed in this document.
The second part of the dissertation analyzes the problems raised by integrating the various methods on board a robot, in order to equip it with a range of possibly redundant capabilities. A navigation demonstration, carried out with the Lama robot of the RIA group at LAAS, is detailed. The integration was performed within the formalism defined by the LAAS architecture (an architecture for autonomous systems) and led to specifying, within that architecture, the framework required for localization functionality. In particular, the problems of time-stamping the data and of fusing position information are taken into account, and a discussion of data transfers is proposed.
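The incremental position computation of the "local" class, accumulating elementary displacements as odometry and the visual method do, can be sketched as the composition of planar rigid motions; this is a generic SE(2) dead-reckoning sketch, not the thesis code.

```python
import numpy as np

def compose(pose, delta):
    """Accumulate an elementary displacement (dx, dy, dtheta), expressed
    in the robot frame, onto a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def integrate(deltas, start=(0.0, 0.0, 0.0)):
    """Dead reckoning: fold elementary displacements into a trajectory."""
    poses = [start]
    for d in deltas:
        poses.append(compose(poses[-1], d))
    return poses
```

Because each step's error is rotated into all subsequent steps, small per-step errors accumulate without bound, which is exactly the drift of local methods that motivates the "global" class described above.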
360

Prototyping methodology of image processing applications on heterogeneous parallel systems

Zhang, Jinglin 19 December 2013
The work presented in this thesis takes place in a context of growing demand for image and video applications on parallel embedded systems. The limitations and lack of flexibility of current parallel embedded systems make it increasingly complicated to implement applications, particularly on heterogeneous systems. The Open Computing Language (OpenCL) is a framework for fully exploiting the computational capability of general-purpose and embedded processors. In parallel, rapid prototyping tools have been proposed to generate reliable prototypes and to automatically implement image and video applications on embedded systems. The goal of this thesis was to evaluate and improve design processes for embedded systems, especially those based on the dataflow approach (a high level of abstraction) and the OpenCL approach (an intermediate level of abstraction). This challenge is tackled by several projects, including the collaborative project COMPA, which studies a framework based on the Orcc, Preesm and HMPP tools. In this context, this thesis aims to validate and evaluate that framework with motion estimation and stereo matching algorithms. To this end, the algorithms were described in the high-level RVC-CAL language. With the help of the Orcc, Preesm and HMPP tools, we generated and verified C, OpenCL, or CUDA code for heterogeneous platforms based on multi-core CPUs and GPUs. We also studied implementations of these algorithms on the latest generation of embedded many-core processor, the MPPA developed by Kalray. We propose three algorithms. The first is a parallelized motion estimation method for a heterogeneous system with one CPU and one GPU, for which we developed a basic method to balance the workload distribution across such a system. The second is a real-time stereo matching method that adopts combined costs and cost aggregation with a square-size step, implemented on a laptop GPU platform: our experimental results outperform other baseline methods in the tradeoff between matching accuracy and time efficiency. The third is a joint motion-based video stereo matching method that uses the motion vectors computed by the first algorithm to build the support region for the second: our experimental results outperform existing stereo video matching methods on test sequences with abundant motion, even under large amounts of noise.
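The CPU/GPU workload-balancing idea behind the first algorithm can be illustrated by a simple proportional split; this sketch assumes per-device throughput has already been measured, and it is not the thesis's exact scheme.

```python
def split_rows(n_rows, cpu_rate, gpu_rate):
    """Split an image's rows between a CPU and a GPU in proportion to
    their measured throughputs (rows/second), so both devices finish at
    roughly the same time. Returns (cpu_rows, gpu_rows)."""
    if cpu_rate <= 0 or gpu_rate <= 0:
        raise ValueError("throughputs must be positive")
    cpu_rows = round(n_rows * cpu_rate / (cpu_rate + gpu_rate))
    return cpu_rows, n_rows - cpu_rows
```

Re-measuring the rates every few frames and re-splitting gives a crude dynamic balancer for workloads whose per-row cost varies over time.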
