  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Intermediate View Interpolation of Stereoscopic Images for 3D-Display

Thulin, Oskar January 2006 (has links)
This thesis investigates how disparity estimation may be used to visualize an object on a 3D screen. The first part looks into different methods of disparity estimation, and the second part examines different ways to visualize an object from one or several stereo pairs and a disparity map. Input to the system is one or several stereo pairs, and output is a sequence of images of the input scene from additional viewing angles. This sequence of images can be shown on Setred AB's 3D screen. The system has strict real-time demands, and the goal is to perform the disparity estimation and visualization in real time. In the first part of the thesis, three different ways to calculate disparity maps are implemented and compared: correlation-based, local structure-based, and phase-based techniques. The correlation-based methods cannot satisfy the real-time demands due to the large number of 2D convolutions required per pixel. The local structure-based methods produce too much noise and cannot satisfy the quality requirements. The best method by far is therefore the phase-based one. This method has been implemented in Matlab and C, and comparisons between the different implementations are presented. The quality of the disparity maps is satisfactory, but the real-time demands cannot yet be fulfilled. Future work is therefore to optimize the C code and move some functions to a GPU, both because a GPU can perform calculations in parallel with the CPU and because many of the calculations are related to resizing and warping, which are well suited to implementation on a GPU.
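As a hedged illustration of the phase-based idea (a generic sketch, not the thesis's actual implementation): local phase along each scanline can be measured with a complex Gabor filter, and disparity recovered as the wrapped phase difference divided by the local frequency. The filter parameters and the synthetic scanline below are illustrative assumptions.

```python
import numpy as np

def gabor_response(signal, freq, sigma):
    """Complex 1-D Gabor filtering; the response's argument is the local phase."""
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * freq * x)
    return np.convolve(signal, kernel, mode="same")

def phase_disparity(left, right, freq=0.25, sigma=8.0):
    """Disparity per pixel: wrapped phase difference over the local frequency."""
    rl = gabor_response(left, freq, sigma)
    rr = gabor_response(right, freq, sigma)
    dphi = np.angle(rl * np.conj(rr))                  # wrapped to [-pi, pi]
    local_freq = np.gradient(np.unwrap(np.angle(rl)))  # phase derivative
    return dphi / np.maximum(local_freq, 1e-6)

# Synthetic scanline pair with a known 3-pixel shift
n = np.arange(400)
left = np.sin(0.25 * n)
right = np.roll(left, 3)
d = phase_disparity(left, right)
```

Away from the image borders, `d` settles near the true 3-pixel shift; the wrapped phase difference also shows why such methods handle only disparities within one filter period per scale, which is why practical systems use coarse-to-fine schemes.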

Dynamic Infrared Simulation : A Feasibility Study of a Physically Based Infrared Simulation Model

Dehlin, Jonas, Löf, Joakim January 2006 (has links)
The increased use of infrared sensors by pilots has created a growing demand for simulated environments based on infrared radiation. This has led to an increased need for Saab to refine their existing model for simulating real-time infrared imagery, which motivated this thesis. Saab develops the Gripen aircraft and provides training simulators where pilots can train in a realistic environment. The new model is required to be based on the real-world behavior of infrared radiation and, unlike Saab's existing model, to have dynamically changeable attributes. This thesis seeks to develop a simulation model compliant with the requirements presented by Saab, and to implement a test environment demonstrating the features and capabilities of the proposed model. Throughout the development of the model, the pilot training value has been kept in mind. The first part of the thesis consists of a literature study that builds a theoretical base for the rest of the work. This is followed by the development of the simulation model itself and a subsequent implementation thereof. The simulation model and the test implementation are evaluated as the final step conducted within the framework of this thesis. The main conclusions are, first, that the proposed simulation model does in fact have its foundation in physics. It is further concluded that certain attributes of the model, such as time of day, are dynamically changeable as requested, and that the test implementation has been feasibly integrated with the current simulation environment. A plan for how to proceed has also been developed. The plan suggests future work with the proposed simulation model, since the evaluation shows that it performs well in comparison with both the existing model and other products on the market.
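The physical foundation such a model rests on can be illustrated (as a generic sketch, not Saab's model) by Planck's law: a surface at a given temperature emits a spectral radiance that, integrated over the sensor's wavelength band, gives the in-band radiance driving simulated pixel intensities. The constants are standard physical values; the 8-12 µm band and 300 K temperature are illustrative assumptions.

```python
import numpy as np

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m), from Planck's law."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    a = 2.0 * h * c**2 / wavelength_m**5
    return a / np.expm1(h * c / (wavelength_m * kB * temp_k))

# In-band radiance of a 300 K surface over an assumed 8-12 um LWIR sensor band
wl = np.linspace(8e-6, 12e-6, 401)
rad = planck_radiance(wl, 300.0)
band = float(np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(wl)))  # trapezoidal rule
```

Because temperature enters the formula directly, attributes such as time of day can be made dynamic simply by updating surface temperatures and re-evaluating the band integral.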

Object Recognition Using Digitally Generated Images as Training Data

Ericson, Anton January 2013 (has links)
Object recognition is a much-studied computer vision problem, where the task is to find a given object in an image. This Master's thesis presents a MATLAB implementation of an object recognition algorithm that finds three kinds of objects in images: electrical outlets, light switches, and wall-mounted air-conditioning controls. Visually, these three objects are quite similar, and the aim is to locate them in an image as well as distinguish them from one another. The object recognition was accomplished using Histograms of Oriented Gradients (HOG). During the training phase, the program was trained with images of the objects to be located, as well as reference images that did not contain the objects. A Support Vector Machine (SVM) was used in the classification phase. The performance was measured for two setups: one where the training data consisted of photos only, and one where the training data additionally included digitally generated images created in 3D modeling software. The results show that using digitally generated images as training images did not improve the accuracy in this case. The likely reason is that there is too little intra-class variability in the gradients of digitally generated images; they are too synthetic, in a sense, which makes them poor at reflecting reality for this specific approach. The result might have been different if a larger number of digitally generated images had been used.
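To make the HOG step concrete, here is a minimal numpy sketch of the descriptor: per-cell histograms of unsigned gradient orientation, crudely normalized. A real pipeline, such as the one the abstract describes, adds block normalization and feeds these vectors to an SVM; the cell size and bin count below are illustrative defaults, not the thesis's settings.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of unsigned gradient orientation."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)

# A vertical edge puts all gradient energy in the 0-degree bin of its cells
img = np.zeros((16, 16))
img[:, 8:] = 1.0
desc = hog_descriptor(img)
```

The edge sensitivity visible here also hints at the abstract's conclusion: rendered images produce unnaturally clean gradients, so their histograms carry less intra-class variation than photographs.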

Point cloud densification

Forsman, Mona January 2010 (has links)
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. This thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Squares Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of different scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see whether they converge to the same result. Wallis filtering is used to increase the contrast in the image. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and comparison of the standard deviation in the x- and y-directions. If a point is rejected, the option exists to try again with a larger template size, called Adaptive Template Size (ATS).
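The triangulation step mentioned above (computing a 3D position from a matched point pair plus camera data) can be sketched with the standard linear DLT method. The camera matrices and point below are made-up examples, not the thesis's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identity pose, and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

With noiseless correspondences this recovers the point exactly; densification methods such as LSTM exist precisely to supply many more of these correspondences, with sub-pixel accuracy, than sparse interest-point matching gives.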

Evolution of modular neural networks for controlling a mobile robot

Carlsson, Johan January 1999 (has links)
In today's development of robot controllers, we find different views on how to approach the problems a robot faces. This work concentrates on artificial neural networks (ANNs) and evolution with genetic algorithms, focusing on a particular ANN architecture presented by Stefano Nolfi. The report can be seen as a continuation of Nolfi's work and treats extensions of the phenomenon of "spontaneous modularity" that Nolfi describes. The test problem used consists of developing a control system for a garbage-collecting robot. This work is based on experiments around this problem with architectures based on Nolfi's spontaneous modularity. We test how the architectures are affected by internal and recurrent nodes. The results indicate that spontaneous modularity does not appear to be positively affected by recurrent or internal nodes.
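As a heavily simplified sketch of the evolutionary machinery (not Nolfi's or this work's exact setup), a genome of real-valued network weights can be evolved with a simple elitist genetic algorithm. The toy fitness function below stands in for evaluating a robot controller in simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, genome_len, pop_size=30, generations=60, sigma=0.1):
    """Minimal elitist GA: keep the best half, refill with mutated copies."""
    pop = rng.normal(size=(pop_size, genome_len))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(scores)[-(pop_size // 2):]]   # best half survives
        children = elite + rng.normal(scale=sigma, size=elite.shape)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(g) for g in pop])
    return pop[int(np.argmax(scores))]

# Toy stand-in for controller evaluation: fitness peaks at a target weight vector
target = np.array([0.5, -0.3, 0.8])
best = evolve(lambda g: -float(np.sum((g - target) ** 2)), genome_len=3)
```

In the actual experiments, fitness would come from simulating the garbage-collecting robot, and the genome would encode the weights of the modular network under study.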

ARAVQ as a data reducer for a classification task in data mining

Ahlén, Niclas January 2004 (has links)
The Adaptive Resource Allocating Vector Quantizer (ARAVQ) is a data reduction technique for mobile robots. The technique has proven successful in simple environments, and it has been speculated that it could serve as a general data mining tool for time series. This report presents experiments in which ARAVQ is used as a data reducer on an artificial and a physiological data set within a data mining context. These data sets differ from earlier robotics environments in that they describe objects with diffuse or overlapping boundaries in the input space. After data reduction, each data set is classified using artificial neural networks. The experimental results indicate that classification with ARAVQ as a data reducer achieves considerably worse results than when ARAVQ is not used. This is assumed to be partly due to the low generalizability of the solutions created by ARAVQ. The discussion proposes that ARAVQ be complemented with a neighborhood function, corresponding to the one in the Self-Organizing Map. With a neighborhood, the relations between the clusters ARAVQ creates are preserved, which is assumed to reduce the consequences of a description ending up in a neighboring cluster.
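As a loudly simplified sketch of the ARAVQ idea (the real algorithm applies additional stability and mismatch criteria before allocating), new model vectors are allocated whenever the moving average of recent inputs is sufficiently novel relative to every stored model vector. The threshold and buffer length below are illustrative, not taken from the report.

```python
import numpy as np

def simple_aravq(stream, novelty=2.0, buffer_len=5):
    """Simplified ARAVQ-style reduction: allocate a new model vector whenever
    the moving-average input is farther than `novelty` from every stored one."""
    models, buf = [], []
    for x in stream:
        buf.append(np.asarray(x, dtype=float))
        buf = buf[-buffer_len:]
        avg = np.mean(buf, axis=0)
        if not models or min(np.linalg.norm(avg - m) for m in models) > novelty:
            models.append(avg)
    return models

# A 1-D series that dwells at two levels is reduced to two model vectors
stream = [[0.0]] * 20 + [[5.0]] * 20
models = simple_aravq(stream)
```

The sketch also shows why diffuse class boundaries are problematic: an input near two model vectors is silently absorbed by whichever is closer, which is exactly the effect a neighborhood function would soften.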

Evaluation of methods for segmentation of 3D range image data / Utvärdering av metoder för segmentering av 3D-data

Schöndell, Andreas January 2011 (has links)
3D cameras delivering height data can be used for quality inspection of goods on a conveyor. It is then of interest to distinguish the important parts of the image from background and noise, and further to divide these interesting parts into segments that correlate strongly with objects on the conveyor belt. Segmentation can easily be done by thresholding in the simple case. However, in more complex situations, for example when objects touch or overlap, this does not work well. In this thesis, research on and evaluation of a few different methods for segmentation of height image data are presented. The focus is on finding an accurate method for segmentation of smooth, irregularly shaped organic objects such as vegetables or shellfish. For evaluation purposes, a database of height images depicting a variety of such organic objects has been collected. We show in the thesis that a conventional gradient magnitude method is hard to beat in the general case. If, however, the objects to be segmented are heavily non-convex, with many crests and valleys within themselves, one could be better off choosing a normalized least squares method.
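The simple-case baseline the abstract mentions, thresholding a height image and then labelling connected regions, can be sketched as follows; the threshold and test image are made up, and the gradient-magnitude and least-squares methods the thesis actually evaluates are considerably more involved.

```python
import numpy as np

def label_regions(mask):
    """4-connected component labelling of a boolean mask (iterative flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

# Height image with two raised "objects" above a flat background
height = np.zeros((20, 20))
height[2:8, 2:8] = 5.0
height[12:18, 10:16] = 3.0
labels, n_objects = label_regions(height > 1.0)   # threshold, then label
```

The failure mode is equally easy to see: if the two objects touched, the flood fill would merge them into one label, which is why edge-based methods are needed for touching or overlapping goods.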

Traditional and interactive representations : A comparative robot study

Stening, John January 2003 (has links)
This work takes up and discusses the possible use of representations in an autonomous robot controlled by an extended sequential neural network (ESCN). The discussion starts from an earlier distinction, put forward by, among others, Bickhard and Terveen (1995), between traditional representations, advocated by cognitivism, and interactive representations, advocated by many proponents of a more embodied and situated view of cognition. The results of this work show that it is possible, with reference to the robot's internal states, to claim that the robot does not use representations in any traditional sense. The results further show that it is possible to claim that the robot uses interactive representations. This result is of interest as an explanatory model for the concept of representation in continued attempts to model cognition using ESCNs within adaptive robotics.

Quantitative image based modelling of food on a plate

M. Fard, Farhad January 2012 (has links)
The main purpose of this work is to reconstruct a 3D model of an entire scene by using two ordinary cameras. We develop a mobile phone application, based on stereo vision and image analysis algorithms, executed either locally or on a remote host, to calculate the dietary intake using the current questionnaire and the mobile phone photographs. The information from the segmented 3D models is used to calculate the volume, and then the calories, of a person's daily food intake. The method is tested using different solid food samples in different camera arrangements. The results show that the method successfully reconstructs 3D models of different food samples with high detail.
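The volume step can be illustrated with a minimal sketch: once a height map of the food above the plate plane has been reconstructed, the volume is the sum of heights times the per-pixel footprint area. All numbers below are hypothetical, and converting volume to calories would additionally require a per-food density and energy table.

```python
import numpy as np

def volume_from_height_map(height_mm, pixel_area_mm2):
    """Integrate a height map (mm above the plate plane) to a volume in mm^3."""
    return float(np.sum(np.clip(height_mm, 0.0, None)) * pixel_area_mm2)

# Hypothetical reconstruction: a 10 mm tall, 20x20-pixel item at 2 mm/pixel
h = np.zeros((100, 100))
h[40:60, 40:60] = 10.0
vol_mm3 = volume_from_height_map(h, pixel_area_mm2=4.0)
```

The clip to non-negative heights discards reconstruction noise below the plate plane, a simple stand-in for the segmentation the abstract describes.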

Object Recognition with Cluster Matching

Lennartsson, Mattias January 2009 (has links)
Within this thesis, an algorithm for object recognition called Cluster Matching has been developed, implemented, and evaluated. The image information is sampled at arbitrary sample points, instead of interest points, and local image features are extracted. These sample points are used as a compact representation of the image data and can quickly be searched for previously known objects. The algorithm is evaluated on a test set of images, and the result is surprisingly reliable and time-efficient.
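A hedged sketch of the generic search step, matching query features against a database of stored object features with a nearest-neighbour ratio test; the thesis's cluster-matching scheme is its own method, so this shows only the common baseline it builds on. The feature vectors are tiny made-up examples.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test."""
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:   # best match clearly beats runner-up
            matches.append((i, int(j)))
    return matches

# Tiny made-up feature sets: each query vector has one clear neighbour
database = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 0.0]])
query = np.array([[0.1, 0.0], [9.0, 9.0]])
matches = match_descriptors(query, database)
```

The ratio test discards ambiguous correspondences, which is what makes such compact feature databases quick yet reliable to search.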
