About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Underwater 3-D imaging with laser triangulation

Norström, Christer January 2006
The objective of this master's thesis was to study the performance of an active triangulation system for 3-D imaging in underwater applications. Structured light from a 20 mW laser and a conventional video camera were used to collect data for generating 3-D images. Different techniques to locate the laser line and transform it into spatial coordinates were developed and evaluated. A field trial and a laboratory trial were performed. From the trials we can conclude that the distance resolution is much higher than the lateral and longitudinal resolutions. The lateral resolution can be improved either by using a high-frame-rate camera or simply by using a low scanning speed. It is possible to obtain a range resolution of less than a millimeter. The maximum range of vision under water was 5 meters measured on a white target and 3 meters for a black target in clear sea water. These results are, however, dependent on environmental and system parameters such as laser power, laser beam divergence and water turbidity. A higher laser power would, for example, increase the maximum range.
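The abstract does not give the exact imaging geometry, but the core of sheet-of-light laser triangulation can be sketched as follows. The pinhole model and all parameter values here are illustrative assumptions, not numbers from the trials:

```python
import numpy as np

def triangulate_range(pixel_offset, focal_px, baseline_m):
    """Range from the pixel displacement of the laser line.

    Simple pinhole model: the laser sheet is offset from the camera
    by baseline_m; a line displaced by pixel_offset pixels from the
    optical axis lies at range focal_px * baseline_m / pixel_offset.
    Parameter names and values are illustrative, not from the thesis.
    """
    pixel_offset = np.asarray(pixel_offset, dtype=float)
    return focal_px * baseline_m / pixel_offset

# A target twice as far away shifts the laser line half as many pixels.
near = triangulate_range(200.0, focal_px=1000.0, baseline_m=0.5)  # 2.5 m
far = triangulate_range(100.0, focal_px=1000.0, baseline_m=0.5)   # 5.0 m
```

The inverse relationship between range and pixel offset is what makes the distance resolution degrade with range, consistent with the sub-millimeter resolution being reported at short range.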
212

Standardized Volume Rendering Protocols for Magnetic Resonance Imaging using Maximum-Likelihood Modeling

Othberg, Fredrik January 2006
Volume rendering (VRT) has been used with great success in studies of patients using computed tomography (CT), largely because of the possibility of standardizing the rendering protocols. With magnetic resonance imaging (MRI) this procedure is considerably more difficult, since the signal from a given tissue can vary dramatically, even for the same patient. This thesis work focuses on how to improve the presentation of MRI data by using VRT protocols that include standardized transfer functions. The study is limited to examining data from patients with suspected renal artery stenosis. A total of 11 patients were examined. A statistical approach is used to standardize the volume rendering protocols. The histogram of the image volume is modeled as the sum of two gamma distributions, corresponding to vessel and background voxels. Parameters describing the gamma distributions are estimated with a maximum-likelihood technique, so that the expectations (E1 and E2) and standard deviations of the two voxel distributions can be calculated from the histogram. These values are used to generate the transfer function. Different combinations of the expectation and standard deviation values were studied in a material of 11 MR angiography datasets, and the visual result was graded by a radiologist. Comparing the grades showed that using only the expectation of the background distribution (E1) and of the vessel distribution (E2) gave the best result. The opacity is then defined as 0 up to a signal threshold of E1, increasing linearly up to 50% at a second threshold E2, and constant at 50% thereafter. The brightness curve follows the opacity curve up to E2, after which it continues to increase linearly up to 100%. A graphical user interface was created to facilitate user control of the volumes and transfer functions. The result of the statistical calculations is displayed in the interface and is used to view and manipulate the transfer function directly in the volume histogram. A transfer function generated with the maximum-likelihood VRT method (ML-VRT) gave a better visual result in 10 of the 11 cases than a transfer function that does not adapt to signal intensity variations.
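The piecewise opacity and brightness curves described above can be written down directly. This sketch assumes scalar signal values and illustrative thresholds, with E1, E2 and the signal maximum taken as given:

```python
import numpy as np

def ml_vrt_transfer(signal, e1, e2, s_max):
    """Transfer functions as described in the abstract.

    Opacity: 0 below E1, rising linearly to 50% at E2, constant 50% above.
    Brightness: follows the opacity curve up to E2, then rises linearly
    to 100% at the maximum signal value s_max.
    """
    s = np.asarray(signal, dtype=float)
    opacity = np.clip((s - e1) / (e2 - e1), 0.0, 1.0) * 0.5
    brightness = np.where(
        s <= e2, opacity, 0.5 + 0.5 * (s - e2) / (s_max - e2)
    )
    return opacity, brightness

# Illustrative thresholds: E1 = 200 (background), E2 = 400 (vessel).
op, br = ml_vrt_transfer([100, 300, 500, 700], e1=200, e2=400, s_max=700)
```

With E1 and E2 estimated from the fitted gamma distributions, the whole transfer function is determined by the histogram alone, which is what makes the protocol standardizable across patients.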
213

Range Gated Viewing with Underwater Camera

Andersson, Adam January 2005
The purpose of this master's thesis, performed at FOI, was to evaluate a range gated underwater camera for the identification of bottom objects. The work was supported by FMV within the framework of “arbetsorder Systemstöd minjakt (Jan Andersson, KC Vapen)”. The central part has been field trials, which were performed in both turbid and clear water. Conclusions about the performance of the camera system have been drawn, based on resolution and contrast measurements from the field trials. Laboratory testing was also performed to measure system-specific parameters, such as the effective gate profile and camera gate distances. The field trials show that images can be acquired at significantly longer distances with the tested gated camera than with a conventional video camera. The distance at which the target can be detected is increased by a factor of 2. For images suitable for mine identification, the increase is about 1.3. However, studies of the performance of other range gated systems show that the increase in range for mine identification can be about 1.6. Gated viewing has also been compared to other technical solutions for underwater imaging.
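The abstract does not detail the gate timing, but the time-of-flight relation behind range gating is simple: the gate opens after a delay t, so only light returning from range (c/n)·t/2 is imaged. This sketch assumes clear water with refractive index about 1.33:

```python
# Time-of-flight geometry behind range gating. The camera gate opens
# a delay t after the laser pulse, imaging only light from one range.
C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_WATER = 1.33       # approximate refractive index of sea water

def gate_range(delay_s, n=N_WATER):
    """Range imaged by a gate opened delay_s seconds after the pulse."""
    return (C_VACUUM / n) * delay_s / 2.0

def gate_delay(range_m, n=N_WATER):
    """Gate delay needed to image a target at range_m metres."""
    return 2.0 * range_m * n / C_VACUUM

delay = gate_delay(5.0)   # roughly 44 ns for a target 5 m away
```

The round-trip factor of 2 and the reduced speed of light in water are why underwater gate delays are tens of nanoseconds even at ranges of a few metres.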
214

Optical Flow Computation on Compute Unified Device Architecture / Optiskt flödeberäkning med CUDA

Ringaby, Erik January 2008
There has been rapid progress in graphics processors in recent years, largely driven by the demands of computer games on speed and image quality. Because of the graphics processor's special architecture, it is much faster at solving parallel problems than a conventional processor. Due to its increasing programmability, it is possible to use it for tasks other than those it was originally designed for. Even though graphics processors have been programmable for some time, it has been quite difficult to learn how to use them. CUDA enables the programmer to use C code, with a few extensions, to program NVIDIA's graphics processors and completely skip the traditional graphics programming models. This thesis investigates whether the graphics processor can be used for calculations without knowledge of how the hardware mechanisms work. An image processing algorithm calculating the optical flow has been implemented. The result shows that it is rather easy to implement programs using CUDA, but some knowledge of how the graphics processor works is required to achieve high performance.
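The abstract does not state which optical flow algorithm was implemented. As an illustration of the kind of data-parallel computation that maps well to CUDA, here is a minimal single-window Lucas-Kanade step in NumPy; this is a CPU sketch of a standard textbook method, not the thesis implementation:

```python
import numpy as np

def lucas_kanade_window(frame1, frame2):
    """One Lucas-Kanade step over a single window.

    Solves the 2x2 normal equations A^T A [u, v]^T = -A^T I_t,
    assuming small motion between the two frames. Every pixel's
    gradient products are independent, which is why this maps
    naturally onto a GPU's parallel architecture.
    """
    iy, ix = np.gradient(frame1)   # spatial gradients (rows = y, cols = x)
    it = frame2 - frame1           # temporal gradient
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)   # estimated flow (u, v)

# Synthetic frames: a sinusoidal pattern shifted one pixel in x.
y, x = np.mgrid[0:64, 0:64].astype(float)
f1 = np.sin(0.3 * x) + np.cos(0.2 * y)
f2 = np.sin(0.3 * (x - 1.0)) + np.cos(0.2 * y)
u, v = lucas_kanade_window(f1, f2)   # u is close to 1, v close to 0
```

In a CUDA version, the per-pixel gradient products would be computed by one thread each and the sums by a parallel reduction, which is exactly the pattern the graphics processor is fast at.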
215

Camera Based Navigation : Matching between Sensor reference and Video image

Olgemar, Markus January 2008
Navigation is typically based on an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS). In navigational warfare the GNSS can be jammed, so a third navigation system is needed. The system tried in this thesis is camera based navigation: the position is determined from a video camera and a sensor reference. This thesis addresses the matching between the sensor reference and the video image. Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation creates a correlation matrix. The other method uses point correspondences between the images to determine a homography between them, and obtains a position through the homography. The more point correspondences there are, the better the position determination will be. The results have been quite good. The methods obtained the correct position when the Euler angles of the UAV were known. Normalized cross correlation was the best of the tested methods.
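The homography-based method can be sketched with the standard direct linear transform (DLT). This is the generic textbook estimator for a homography from point correspondences, not necessarily the exact formulation used in the thesis:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 homography H
    mapping src points to dst points. Needs at least 4 correspondences;
    more correspondences give a better least-squares estimate."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)       # null-space vector = flattened H
    return h / h[2, 2]             # normalize the scale

# Toy correspondences: image points translated by (2, 3) are
# explained exactly by a pure-translation homography.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
dst = [(x + 2.0, y + 3.0) for x, y in src]
H = estimate_homography(src, dst)
```

With noisy correspondences the SVD solution minimizes the algebraic error over all of them, which is consistent with the observation that more point correspondences improve the position determination.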
216

Make it Meaningful : Semantic Segmentation of Three-Dimensional Urban Scene Models

Lind, Johan January 2017
Semantic segmentation of a scene aims to give meaning to the scene by dividing it into meaningful — semantic — parts. Understanding the scene is of great interest for all kinds of autonomous systems, but manual annotation is simply too time consuming, which is why there is a need for an alternative approach. This thesis investigates the possibility of automatically segmenting 3D models of urban scenes, such as buildings, into a predetermined set of labels. The approach was to first acquire ground truth data by manually annotating five 3D models of different urban scenes. The next step was to extract features from the 3D models and evaluate which ones constitute a suitable feature space. Finally, three supervised learners were implemented and evaluated: k-Nearest Neighbour (KNN), Support Vector Machine (SVM) and Random Classification Forest (RCF). The classification was done point-wise, classifying each 3D point in the dense point cloud belonging to the model being classified. The results showed that the most suitable feature space is not necessarily the one containing all features. The KNN classifier got the highest average accuracy over all models, classifying 42.5% of the 3D points correctly. The RCF classifier managed to classify 66.7% of the points correctly in one of the models, but had worse performance on the rest of the models, resulting in a lower average accuracy than KNN. In general, KNN, SVM and RCF seemed to have different benefits and drawbacks. KNN is simple and intuitive but by far the slowest classifier when dealing with a large set of training data. SVM and RCF are both fast but difficult to tune, as there are more parameters to adjust. Whether the relatively low best accuracy was due to the lack of ground truth training data, unbalanced validation models, or the capacity of the learners was never investigated due to the limited time span. However, this ought to be investigated in future studies.
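Point-wise KNN classification of 3D points can be sketched in a few lines. The toy clusters and labels below are invented for illustration and are far simpler than real urban point clouds; the brute-force distance matrix also shows why KNN becomes the slowest learner as the training set grows:

```python
import numpy as np

def knn_classify(train_pts, train_labels, query_pts, k=3):
    """Point-wise k-nearest-neighbour classification: each query point
    receives the majority label among its k nearest training points.
    The full distance matrix makes the cost O(n_query * n_train)."""
    d = np.linalg.norm(query_pts[:, None, :] - train_pts[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_labels[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

# Two toy "classes" of 3D points: a cluster near the origin (label 0)
# and one shifted along z (label 1), standing in for e.g. ground vs roof.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0.0, 0.3, (20, 3)),
                   rng.normal([0.0, 0.0, 5.0], 0.3, (20, 3))])
labels = np.array([0] * 20 + [1] * 20)
query = np.array([[0.1, 0.0, 0.2], [0.0, 0.1, 4.9]])
pred = knn_classify(train, labels, query)
```

In practice the feature vector per point would include the geometric and colour features the thesis extracts, not just raw coordinates.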
217

UKF-SLAM Implementation for the Optical Navigation System of a Lunar Lander

Garcia, Laura January 2017
No description available.
218

Evaluation of Aerial Image Stereo Matching Methods for Forest Variable Estimation

Svensk, Joakim January 2017
This work investigates the landscape of aerial image stereo matching (AISM) methods suitable for large-scale forest variable estimation. AISM methods are an important source of remotely collected information used in modern forestry to keep track of a growing forest's condition. A total of 17 AISM methods are investigated, of which 4 are evaluated by processing a test dataset consisting of three aerial images. The test area is located in southern Sweden and consists mainly of Norway spruce and Scots pine. From the resulting point clouds and height raster images, a total of 30 different metrics of both height and density types are derived. Linear regression is used to fit functions from the AISM-derived metrics to a set of forest variables: tree height (HBW), tree diameter (DBW), basal area, and volume. Data collected by dense airborne laser scanning is used as ground truth. Results are presented as RMSE and standard deviation obtained from the linear regression. For tree height, tree diameter, basal area and volume, the RMSE ranged from 7.442% to 10.11%, 11.58% to 13.96%, 32.01% to 35.10%, and 34.01% to 38.26%, respectively. The results showed that all four tested methods achieved comparable estimation quality, although with small differences among them. Keystone and SURE performed somewhat better, while MicMac placed third and Photoscan achieved the least accurate result.
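The evaluation step can be sketched as an ordinary least-squares fit with the RMSE reported relative to the mean, matching the percentage figures above. The metrics and coefficients below are synthetic stand-ins, not values from the study:

```python
import numpy as np

def relative_rmse(metrics, target):
    """Fit target = intercept + metrics @ beta by least squares and
    return the RMSE as a percentage of the mean target value."""
    x = np.column_stack([np.ones(len(target)), metrics])
    beta, *_ = np.linalg.lstsq(x, target, rcond=None)
    residual = target - x @ beta
    return 100.0 * np.sqrt(np.mean(residual ** 2)) / np.mean(target)

# Synthetic stand-in: a "tree height" variable driven by two height
# metrics (e.g. an upper-percentile height and a mean height) plus noise.
rng = np.random.default_rng(1)
m = rng.uniform(10.0, 30.0, (50, 2))
h = 0.8 * m[:, 0] + 0.1 * m[:, 1] + rng.normal(0.0, 0.5, 50)
r = relative_rmse(m, h)   # a few percent for this low-noise example
```

Reporting RMSE as a percentage of the mean is what makes the height, diameter, basal-area and volume errors comparable despite their different units.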
219

Runway detection in LWIR video : Real time image processing and presentation of sensor data

Cedernaes, Erasmus January 2016
Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and FPGA acceleration. The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, as well as by using a modified Hough line transform and a symmetric search for peaks in the accumulator returned by the Hough line transform. A video chain was implemented on a Xilinx ZC702 development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM and the detection algorithm ran on the CPU, which, however, did not meet the real-time requirement. Strategies were proposed to improve the processing speed through either hardware acceleration or algorithmic changes.
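A minimal, unmodified Hough line transform illustrates the accumulator in which the detector searches for peaks. The thesis uses a modified variant with a symmetric peak search, so this is only the textbook baseline:

```python
import numpy as np

def hough_lines(edge_points, img_diag, n_theta=180):
    """Minimal Hough line transform: each edge point (x, y) votes for
    rho = x*cos(theta) + y*sin(theta) over a range of angles, so
    collinear points accumulate votes in one (rho, theta) bin."""
    thetas = np.deg2rad(np.arange(n_theta) - 90)   # -90 .. 89 degrees
    acc = np.zeros((2 * img_diag + 1, n_theta), dtype=int)
    for x, y in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + img_diag, np.arange(n_theta)] += 1   # offset rho to >= 0
    return acc, thetas

# Edge points on the vertical line x = 10: all 50 of them vote for
# the same bin (rho = 10, theta = 0 degrees).
pts = [(10, y) for y in range(50)]
acc, thetas = hough_lines(pts, img_diag=80)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

A runway's two roughly symmetric edge lines would appear as a pair of peaks in this accumulator, which is presumably what the symmetric peak search exploits.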
220

Segmentation of Clouds in Satellite Images / Klassificering av Moln i Satellitbilder

Gasslander, Maja January 2016
The usage of 3D modelling is increasing fast, both for civilian and military areas, such as navigation, targeting and urban planning. When creating a 3D model from satellite images, clouds can be problematic; thus, automatic detection of clouds in the images is of great use. This master's thesis was carried out at Vricon, which produces 3D models of the earth from satellite images. The thesis aimed to investigate whether Support Vector Machines could classify pixels into cloud or non-cloud, with a combination of texture and color as features. To reach the stated goal, the task was divided into several subproblems, where the first part was to extract features from the images. The images were then preprocessed before being fed to the classifier. After that, the classifier was trained and finally evaluated. The two methods that gave the best results in this thesis had approximately 95% correctly classified pixels. This result is better than the existing cloud segmentation method at Vricon, for the tested terrain and cloud types.
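The abstract combines texture and color features per pixel before classification. As a hedged sketch of that feature-extraction step, the snippet below builds a simple per-pixel feature vector from brightness and local standard deviation (a crude texture proxy); the window size and toy image are illustrative assumptions, not the features used in the thesis:

```python
import numpy as np

def pixel_features(gray, win=5):
    """Per-pixel features: brightness plus local standard deviation
    (a crude texture measure) over a win x win neighbourhood."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    # Stack every shifted view of the window, then reduce over them.
    windows = np.stack([padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
                        for dy in range(win) for dx in range(win)])
    return np.dstack([gray, windows.std(axis=0)])

# Toy image: a bright, homogeneous "cloud" patch on a noisy dark
# background. Clouds are typically bright and locally smooth, so the
# two features separate the classes even in this crude sketch.
rng = np.random.default_rng(2)
img = rng.uniform(0.0, 0.4, (32, 32))
img[8:24, 8:24] = 0.9
feat = pixel_features(img)   # shape (32, 32, 2)
```

In the thesis these per-pixel feature vectors would then be fed to the trained SVM, which draws the cloud/non-cloud decision boundary in feature space.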
