
Real-time Object Recognition on a GPU

Pettersson, Johan, January 2007
Shape-based matching (SBM) is a well-known method for 2D object recognition that is rather robust against illumination variations, noise, clutter and partial occlusion. The objects to be recognized can be translated, rotated and scaled. The translation of an object is determined by evaluating a similarity measure at all possible positions (similar to cross-correlation). The similarity measure is based on dot products between normalized gradient directions at edges. Rotation and scale are determined by evaluating all possible combinations, which spans a huge search space. A resolution pyramid is used as a search heuristic, which yields real-time performance. In standard SBM, a model consisting of normalized edge gradient directions is constructed for every combination of rotation and scale. We avoid this by using bilinear interpolation in the gradient map of the search image, which greatly reduces the amount of storage required. SBM is highly parallelizable by nature, and with our suggested improvements it becomes well suited for running on a GPU. This has been implemented and tested, and the results clearly outperform those of our reference CPU implementation (by factors in the hundreds). The implementation is also very scalable and will benefit from future devices without further effort. Extensive evaluation material and tools for evaluating object recognition algorithms have been developed, and the implementation is evaluated and compared to two commercial 2D object recognition solutions. The results show that the method handles the distortions listed above very well and competes well with the commercial alternatives.
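The similarity evaluation described in this abstract can be sketched in Python. This is a minimal illustration, not the thesis implementation: the function name, argument layout and the per-point loop are assumptions for clarity.

```python
import numpy as np

def sbm_similarity(model_pts, model_dirs, grad_x, grad_y, offset):
    """Score one candidate translation: mean dot product between the
    model's unit gradient directions and the normalized gradients of
    the search image, sampled with bilinear interpolation so that no
    per-rotation/per-scale model needs to be stored."""
    h, w = grad_x.shape
    score = 0.0
    for (px, py), (mx, my) in zip(model_pts, model_dirs):
        x, y = px + offset[0], py + offset[1]
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        if not (0 <= x0 < w - 1 and 0 <= y0 < h - 1):
            continue  # model point falls outside the search image
        fx, fy = x - x0, y - y0
        # bilinear interpolation in both gradient component maps
        gx = ((1 - fx) * (1 - fy) * grad_x[y0, x0]
              + fx * (1 - fy) * grad_x[y0, x0 + 1]
              + (1 - fx) * fy * grad_x[y0 + 1, x0]
              + fx * fy * grad_x[y0 + 1, x0 + 1])
        gy = ((1 - fx) * (1 - fy) * grad_y[y0, x0]
              + fx * (1 - fy) * grad_y[y0, x0 + 1]
              + (1 - fx) * fy * grad_y[y0 + 1, x0]
              + fx * fy * grad_y[y0 + 1, x0 + 1])
        norm = np.hypot(gx, gy)
        if norm > 1e-9:
            # dot product with the normalized search gradient
            score += (mx * gx + my * gy) / norm
    return score / len(model_pts)
```

A perfect match scores 1.0; evaluating this for every candidate position (and, on a GPU, many rotations and scales in parallel) is the core of the SBM search.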

Construction of a solid 3D model of geology in Sardinia using GIS methods

Tavakoli, Saman, January 2009
3D visualization of geological structures is a very efficient way to build a good understanding of geological features. It is not only illustrative for laypeople, but also a comprehensive method for interpreting the results of a study. Geologists, geophysicists and GIS experts sometimes need to visualize an area to carry out their research. Visualization can show how sample data are distributed over the area, and can therefore serve as a way to validate results. Among the available 3D modeling methods, some are expensive or complicated, so a methodology that enables easy and cheap construction of a 3D model is in high demand. Several obstacles arise, however, when constructing a 3D model of geology. The main debate concerns the choice of interpolation method: modelers may obtain different results even when working with the same data set. Furthermore, parts of the data can themselves be sources of error, and it is important to decide whether to omit those data or adopt another strategy. Even after considering all these points, the work may still not be accurate enough for scientific use unless the interpretation is done precisely. This study describes an approach to 3D modeling of the Sedini platform in Sardinia, Italy. GIS software was used together with Surfer and Voxler. Data manipulation, geodatabase creation and interpolation tests were all done with the aid of GIS. A variety of interpolation methods available in Surfer were tested, together with ArcView, in order to choose a suitable one. A solid 3D model was then created in the Voxler environment. Unlike many other 3D packages, Voxler requires four components to construct a 3D model: in addition to the XYZ coordinates, a C value was used as the fourth component to differentiate features in the platform and to grid based on a chosen value. With the aid of the C value, a layer of interest can be marked and distinguished from other layers. The final result is a solid 3D model of the Sedini platform, including both surfaces and subsurfaces. An isosurface with a unique value (isovalue) can mark a layer of interest and make the results easy to interpret. However, errors in some parts of the model are also noticeable. Since data acquisition was carried out to study the geological and mineralogical characteristics of the area, relatively few data points per volume were collected, in line with the goals of the initial project. Moreover, along some geological borderlines the density of sample points is not high enough to estimate the location of the lines accurately. The results are applicable to a broad range of geological studies; resource evaluation, geomorphology, structural geology and GIS are only a few examples. They can also be compared with similar work done in other software packages, in order to understand the pros and cons of each package and its appropriate application for a given task.

Keywords: GIS, Image Interpretation, Geodatabase, Geology, Interpolation, 3D Modeling
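As an illustration of the kind of scattered-data interpolation compared in such work, the following is a minimal inverse-distance-weighting sketch over (x, y, z, C) samples. The function name and parameters are hypothetical, and Surfer's own gridding methods differ in detail.

```python
import numpy as np

def idw_interpolate(sample_xyz, sample_c, query_xyz, power=2.0):
    """Inverse-distance-weighted estimate of the C value at each query
    point from scattered (x, y, z, C) samples. Exact at sample points."""
    sample_xyz = np.asarray(sample_xyz, float)
    sample_c = np.asarray(sample_c, float)
    out = np.empty(len(query_xyz))
    for i, q in enumerate(np.asarray(query_xyz, float)):
        d = np.linalg.norm(sample_xyz - q, axis=1)
        hit = d < 1e-12
        if hit.any():                      # query coincides with a sample
            out[i] = sample_c[hit][0]
            continue
        w = 1.0 / d ** power               # closer samples weigh more
        out[i] = np.dot(w, sample_c) / w.sum()
    return out
```

Running this on a regular grid of query points yields the kind of voxel field that can then be rendered as isosurfaces.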

Automatiserad inlärning av detaljer för igenkänning och robotplockning / Autonomous learning of parts for recognition and robot picking

Wernersson, Björn, and Södergren, Mikael, January 2005
Just how far is it possible to automate the learning of new parts for recognition and robot picking? This thesis initially gives the prerequisites for the steps in learning and calibration that are to be automated. Among these tasks are selecting a suitable part model from numerous candidates with the help of a new part segmenter, as well as computing the spatial extent of this part to facilitate robotic collision handling. Other tasks are analyzing the part model in order to highlight correct and suitable edge segments for increasing pattern matching certainty, and choosing appropriate acceptance levels for pattern matching. Further tasks deal with simplifying camera calibration by analyzing the calibration pattern, and with compensating for differences in perspective at great depth variations by calculating the center of perspective of the image. The image processing algorithms created to solve these tasks are described and evaluated thoroughly. This thesis shows that simplifying the steps of learning and calibration with the help of advanced image processing really is possible.
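One of the simpler automated steps, computing the spatial extent of a segmented part, can be sketched as follows. This is an assumed minimal version working on a binary mask, not the algorithm developed in the thesis.

```python
import numpy as np

def part_extent(mask):
    """Axis-aligned bounding box (row_min, row_max, col_min, col_max)
    of a binary part mask -- a rough spatial extent usable for simple
    robot collision checks."""
    rows = np.any(mask, axis=1)            # rows containing part pixels
    cols = np.any(mask, axis=0)            # columns containing part pixels
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return int(rmin), int(rmax), int(cmin), int(cmax)
```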

Utveckling av mobiltelefonapplikation för kommunikation i ad-hoc nätverk med Bluetoothteknik / Development of a mobile phone application for communication in ad-hoc networks using Bluetooth technology

Simberg, Gustav, and Viggeborn, Björn, January 2005
The purpose of this thesis is to develop an application for mobile phones that simplifies communication. The company Doberman wanted to explore the possibility of developing such an application using Bluetooth™ technology to communicate in ad-hoc networks. The aim has been an application, running on mobile phones, in which you can send messages and files to other devices and also add a user profile with personal information to share with others. Communication takes place in temporary networks created when Bluetooth-enabled devices are in range of each other. The market for mobile phones has grown rapidly over the past years and is still growing. There are many different phone models, and it is difficult to find a developer platform that covers many of them. At the beginning of this thesis a survey of different developer platforms was made. The Java™ platform is supported by most phones but has limited access to functions on the device. The best alternative was Symbian C++ for devices with Symbian OS; it does not have the same limitations as Java and is still supported by relatively many devices. The application was then developed in Symbian C++. There are a number of different versions of Symbian OS, and different GUI platforms that run on Symbian OS, which leads to further development issues. We limited the development of the application to the Series 60 platform for Symbian OS v7.0s. During design and implementation, portability to other GUI platforms was considered. We tested the application on emulators compatible with Symbian OS v7.0s and Symbian OS v8.0a and found some compatibility problems between the two versions. We also tested the application on mobile phones; between the emulator and a phone with the corresponding OS version, no new problems occurred.

Model-Based Eye Detection and Animation

Trejo Guerrero, Sandra, January 2006
In this thesis we present a system that extracts eye motion from a video stream containing a human face and applies this eye motion to a virtual character. By eye motion estimation we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion in the virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, detection and extraction of the eyes is applied, based on edge detection. The iris center is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image. Once the eye motion has been extracted, it is translated, using MPEG-4 Facial Animation, into Facial Animation Parameters (FAPs). Thus we can improve the quality and quantity of the facial animation expressions that can be synthesized for a virtual character.
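A minimal sketch of the final step, locating the iris centre in a cropped eye image: the thresholding-plus-centroid approach and all names here are illustrative assumptions; the thesis itself uses edge features and region segmentation.

```python
import numpy as np

def iris_center(eye_gray):
    """Estimate the iris centre in a cropped grey-level eye image by
    segmenting the darkest pixels (iris/pupil) and taking their
    centroid. Returns (x, y) in image coordinates."""
    thresh = eye_gray.min() + 0.2 * (eye_gray.max() - eye_gray.min())
    ys, xs = np.nonzero(eye_gray <= thresh)   # dark iris/pupil pixels
    return xs.mean(), ys.mean()
```

Tracking this (x, y) estimate over frames gives the raw signal that would then be mapped to MPEG-4 FAPs.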

Simulering av filtrerade skärmfärger / Simulation of filtered screen colors

Andersson, Christian, January 2005
This report presents a working model for simulating what happens to colors displayed on screens when they are observed through optical filters. The output of the model can be used to visually simulate, on one screen, another screen with an optical filter applied. The model can also produce CIE color difference values for the simulated screen colors. The model is data-driven and requires spectral measurements of at least the screen to be simulated and the physical filters to be used. The model is divided into three separate modules or steps, each of which can easily be replaced by an alternative implementation or solution. Tests show that the model can be used for prototyping of optical filters, even though the tests of the specific algorithms chosen show that there is room for improvement in quality. Nothing indicates that future work on this model could not produce better results.
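The core of such a data-driven model, multiplying the screen's measured spectrum by the filter's transmittance and integrating against colour-matching functions, can be sketched as follows. The function name and sampling step are assumptions, not the report's actual modules.

```python
import numpy as np

def filtered_xyz(screen_spd, filter_trans, cmf, d_lambda=10.0):
    """Simulate viewing a screen colour through an optical filter:
    multiply the screen's spectral power distribution by the filter's
    transmittance per wavelength sample, then integrate against the
    CIE colour-matching functions (shape n x 3) to get XYZ."""
    spd = np.asarray(screen_spd, float) * np.asarray(filter_trans, float)
    # Riemann-sum approximation of the tristimulus integrals
    return np.sum(spd[:, None] * np.asarray(cmf, float), axis=0) * d_lambda
```

Two such XYZ values (simulated vs. reference) can then be converted to a CIE colour difference, as the report's model does.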

Vehicle Detection in Monochrome Images

Lundagårds, Marcus, January 2008
The purpose of this master thesis was to study computer vision algorithms for vehicle detection in monochrome images captured by a mono camera. The work has mainly been focused on detecting rear views of cars in daylight conditions. Previous work in the literature has been reviewed, and algorithms based on edges, shadows and motion as vehicle cues have been modified, implemented and evaluated. This work presents a combination of multiscale edge-based detection and shadow-based detection as the most promising algorithm, with a positive detection rate of 96.4% on vehicles at distances between 5 m and 30 m. For the algorithm to work in a complete vehicle detection system, future work should focus on developing a vehicle classifier to reject false detections.
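As a toy illustration of the shadow cue, the following flags unusually dark image rows as candidate under-vehicle shadow regions. The function and its threshold are assumptions, far simpler than the evaluated algorithms.

```python
import numpy as np

def shadow_rows(gray, dark_frac=0.5):
    """Flag image rows whose mean intensity is well below the global
    mean -- a crude version of the shadow cue (the dark region under a
    vehicle) used to hypothesise vehicle locations."""
    row_mean = gray.mean(axis=1)
    return np.nonzero(row_mean < dark_frac * gray.mean())[0]
```

In a full detector, such shadow hypotheses would be intersected with multiscale edge responses before classification.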

Image coding with H.264 I-frames / Stillbildskodning med H.264 I-frames

Eklund, Anders, January 2007
In this thesis work, a part of the video coding standard H.264 has been implemented. The part of the video coder used to code I-frames has been implemented in order to see how well suited it is for regular image coding. The big difference compared with other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 only use a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method thus closely resembles a video encoder, with the difference that blocks of an image are predicted instead of frames in a video sequence.
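The predict-then-transform idea can be illustrated with DC intra prediction, one of the H.264 intra modes: the block is predicted from its reconstructed neighbours, and only the residual is passed on to the transform and quantization. The names below are illustrative, not taken from the thesis code.

```python
import numpy as np

def dc_predict_residual(block, left_col, top_row):
    """H.264-style DC intra prediction (sketch): predict every pixel of
    the block as the mean of the reconstructed neighbours to the left
    and above, and return the residual to be transformed and coded."""
    dc = np.concatenate([left_col, top_row]).mean()
    pred = np.full(block.shape, dc)
    return block - pred

def dc_reconstruct(residual, left_col, top_row):
    """Decoder side: rebuild the block from the residual and the same
    neighbour-derived DC prediction."""
    dc = np.concatenate([left_col, top_row]).mean()
    return residual + dc
```

For smooth image regions the residual is near zero, which is why the subsequent transform and quantization compress so well.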

Robot Tool Center Point Calibration using Computer Vision

Hallenberg, Johan, January 2007
Today, tool center point calibration is mostly done manually. That procedure is very time consuming, and the result varies with the skill of the operator. This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, making use of computer vision and image processing techniques. The new method has several advantages over manual calibration. Experimental verification has shown that the proposed method is much faster while delivering comparable or even better accuracy. The setup is very simple: only one USB camera connected to a laptop is needed, and no contact with the robot tool is necessary during the calibration procedure. The method can be split into three parts. First, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotationally symmetric robot tool; this segmentation is necessary for measuring the camera-to-tool transformation with six degrees of freedom. The last part is an iterative procedure that automates an ordinary four-point tool center point calibration algorithm. The iterative procedure ensures that the accuracy of the tool center point calibration depends only on the accuracy of the camera when registering a movement between two positions.
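The ordinary four-point calibration that the iterative procedure automates can be sketched as a least-squares problem: with the tool tip touching the same fixed point under several wrist orientations, R_i t + p_i is constant, which determines the tool offset t. The code below is a hedged sketch with assumed names, not the thesis implementation.

```python
import numpy as np

def tcp_from_poses(rotations, translations):
    """Classic four-point TCP calibration: given wrist poses (R_i, p_i)
    in which the unknown tool tip touches the same fixed point, we have
    R_i t + p_i = R_j t + p_j, so stacking (R_i - R_j) t = p_j - p_i
    over pose pairs gives a least-squares problem for the tool offset t."""
    A, b = [], []
    for i in range(len(rotations) - 1):
        A.append(rotations[i] - rotations[i + 1])
        b.append(translations[i + 1] - translations[i])
    t, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return t
```

At least four sufficiently different orientations are needed for the stacked system to have full rank.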

Analysis of RED ONE Digital Cinema Camera and RED Workflow

Foroughi Mobarakeh, Taraneh, January 2009
RED Digital Cinema is a rather new company that has developed a camera that has shaken the film industry: the RED One. RED One is a digital cinema camera with the characteristics of a 35 mm film camera. With a custom-made 12-megapixel CMOS sensor, it produces images with a filmic look that cannot be achieved with many other digital cinema cameras. With a new camera comes a new set of media files to work with, and new software applications supporting them. RED Digital Cinema has developed several applications of its own, and a few other software packages also support RED. As of today, however, ways of working with RED media files in these applications are still evolving, and during the short time that RED One has existed, many questions have arisen about which workflow is best. This thesis presents a theoretical background on the RED camera and some software applications supporting RED media files. The main objective is to analyze RED material as well as existing workflows and to find the optimal option.
