121. Segmentering och klassificering av LiDAR-data / Segmentation and Classification of LiDAR data. Landgård, Jonas, January 2005.
With numerous applications in both military and civilian life, the demand for accurate 3D models of real-world environments increases rapidly. Using an airborne laser scanner for the raw data acquisition and robust methods for data processing, the researchers at the Swedish Defence Research Agency (FOI) in Linköping hope to fully automate the modeling process. The work of this thesis has mainly been focused on three areas: ground estimation, image segmentation and classification. Procedures have been developed in each of these areas, leading to a new algorithm for ground estimation, a number of segmentation methods and a full comparison of various decision values for an object-based classification. The ground estimation algorithm developed has yielded good results compared to the method based on active contours previously elaborated at FOI. The computational effort needed by the new method has been greatly reduced compared to the former, while performance, particularly in urban areas, has been improved. The segmentation methods introduced have shown promising results in separating different types of objects. A new set of decision values and descriptors for the object-based classifier has been suggested, which, according to tests, proves to be more efficient than the set previously used. / With many applications in both the civilian and military domains, the demand for accurate models of the surrounding world is growing rapidly. Researchers at FOI, the Swedish Defence Research Agency, are working towards fully automating the process that generates these three-dimensional models of real environments. An airborne laser radar is used for the data collection, and robust methods for the subsequent data processing are under continuous development. The work presented in this report can be divided into three main areas: ground surface estimation, data segmentation and classification. Methods have been developed within each area, leading to a new algorithm for ground estimation, a number of segmentation methods and a thorough comparison of different decision values for an object-based classification. The ground estimation algorithm has proven effective compared with a method based on active contours previously developed at FOI. The computational burden of the new method is only a fraction of that of the former, while performance, particularly in urban environments, has been improved. The segmentation methods introduced have shown promising results in distinguishing different types of objects. Finally, a new set of descriptors and decision values for the object-based classifier has been suggested; according to the tests presented in the report, it has proven more efficient than the set used so far.
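The abstract does not detail how the new ground estimation algorithm works, so the sketch below is not that algorithm but a deliberately simple grid-minimum baseline for the same task, included only to make the problem concrete: the lowest return in each horizontal cell is taken as local ground, and points close to it are labelled ground. The cell size, height tolerance and test data are all assumptions.

```python
import numpy as np

def grid_minimum_ground(points, cell=2.0, height_tol=0.5):
    """Toy ground filter (NOT the thesis's algorithm): the lowest return in
    each horizontal grid cell is taken as the local ground level, and points
    within height_tol of that level are labelled as ground."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 100_000 + ij[:, 1]          # simplistic cell hash, fine for this sketch
    ground_level = {}
    for k, z in zip(keys, points[:, 2]):
        ground_level[k] = min(z, ground_level.get(k, np.inf))
    floor = np.array([ground_level[k] for k in keys])
    return points[:, 2] - floor < height_tol      # True = ground point

# Illustrative use on a random point cloud (x, y, z in metres).
pts = np.random.default_rng(3).uniform([0.0, 0.0, 0.0], [100.0, 100.0, 15.0], size=(10_000, 3))
print("ground points:", int(grid_minimum_ground(pts).sum()))
```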
122. Underwater 3-D imaging with laser triangulation. Norström, Christer, January 2006.
The objective of this master's thesis was to study the performance of an active triangulation system for 3-D imaging in underwater applications. Structured light from a 20 mW laser and a conventional video camera were used to collect data for the generation of 3-D images. Different techniques to locate the laser line and transform it into spatial coordinates were developed and evaluated. A field trial and a laboratory trial were performed. From the trials we can conclude that the distance resolution is much higher than the lateral and longitudinal resolutions. The lateral resolution can be improved either by using a high frame rate camera or simply by using a low scanning speed. It is possible to obtain a range resolution of less than a millimeter. The maximum range of vision was 5 meters under water measured on a white target and 3 meters for a black target in clear sea water. These results are, however, dependent on environmental and system parameters such as laser power, laser beam divergence and water turbidity. A higher laser power would, for example, increase the maximum range.
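A minimal sketch of how a structured-light (sheet-of-light) frame can be turned into spatial coordinates. The geometry is an assumption made for illustration: the laser is offset from the camera by a baseline along the image y-axis and its light sheet is tilted by an angle alpha towards the optical axis, so that the sheet satisfies y = baseline - z*tan(alpha). The focal length, principal point and all numeric values are likewise illustrative, not the parameters of the system used in the trials.

```python
import numpy as np

def laser_line_to_points(image, f=800.0, cx=320.0, cy=240.0,
                         baseline=0.30, alpha_deg=20.0):
    """Per column, find the laser line as the brightest row, then triangulate.
    Assumed geometry: camera at the origin looking along z, laser offset by
    `baseline` along y, light sheet satisfying y = baseline - z*tan(alpha)."""
    tan_a = np.tan(np.radians(alpha_deg))
    pts = []
    for u in range(image.shape[1]):
        col = image[:, u].astype(float)
        if col.max() < 50:                 # no visible laser return in this column
            continue
        v = col.argmax()                   # peak row (sub-pixel refinement omitted)
        yn = (v - cy) / f                  # normalised image coordinate
        z = baseline / (yn + tan_a)        # intersect the viewing ray with the light sheet
        pts.append((z * (u - cx) / f, z * yn, z))
    return np.array(pts)

# Quick check on a synthetic frame with a bright diagonal "laser line".
img = np.zeros((480, 640), dtype=np.uint8)
img[240 + np.arange(640) // 8, np.arange(640)] = 255
print(laser_line_to_points(img).shape)
```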
123. Standardized Volume Rendering Protocols for Magnetic Resonance Imaging using Maximum-Likelihood Modeling. Othberg, Fredrik, January 2006.
Volume rendering (VRT) has been used with great success in studies of patients using computed tomography (CT), largely because of the possibility of standardizing the rendering protocols. With magnetic resonance imaging (MRI) this is considerably more difficult, since the signal from a given tissue can vary dramatically, even for the same patient. This thesis work focuses on how to improve the presentation of MRI data by using VRT protocols that include standardized transfer functions. The study is limited to examining data from patients with suspected renal artery stenosis; a total of 11 patients were examined. A statistical approach is used to standardize the volume rendering protocols. The histogram of the image volume is modeled as the sum of two gamma distributions, corresponding to vessel and background voxels. The parameters describing the gamma distributions are estimated with a maximum-likelihood technique, so that the expectations (E1 and E2) and standard deviations of the two voxel distributions can be calculated from the histogram. These values are used to generate the transfer function. Different combinations of the expectation and standard deviation values were studied in a material of 11 MR angiography datasets, and the visual result was graded by a radiologist. Comparing the grades showed that using only the expectation of the background distribution (E1) and of the vessel distribution (E2) gave the best result. The opacity is then defined as 0 up to a signal threshold E1, increasing linearly to 50 % at a second threshold E2, and remaining constant at 50 % above E2. The brightness curve follows the opacity curve up to E2, after which it continues to increase linearly up to 100 %. A graphical user interface was created to facilitate user control of the volumes and transfer functions. The result of the statistical calculations is displayed in the interface and is used to view and manipulate the transfer function directly in the volume histogram. A transfer function generated with the maximum-likelihood VRT method (ML-VRT) gave a better visual result than a transfer function not adapting to signal intensity variations in 10 of the 11 cases.
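A small sketch of the transfer function described above, assuming the maximum-likelihood estimates E1 and E2 have already been obtained from the two-gamma histogram fit; the upper signal bound s_max and the example numbers are assumptions for illustration.

```python
import numpy as np

def transfer_function(signal, e1, e2, s_max):
    """Opacity/brightness curves from the ML estimates: opacity is 0 up to E1,
    rises linearly to 50 % at E2 and stays at 50 %; brightness follows the
    opacity curve up to E2 and then keeps rising linearly to 100 % at s_max."""
    opacity = np.interp(signal, [0.0, e1, e2, s_max], [0.0, 0.0, 0.5, 0.5])
    brightness = np.interp(signal, [0.0, e1, e2, s_max], [0.0, 0.0, 0.5, 1.0])
    return opacity, brightness

# Illustrative values; in ML-VRT, E1 and E2 come from fitting two gamma
# distributions to the volume histogram with maximum likelihood.
signal = np.linspace(0, 1000, 6)
print(transfer_function(signal, e1=200.0, e2=600.0, s_max=1000.0))
```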
124. Range Gated Viewing with Underwater Camera. Andersson, Adam, January 2005.
The purpose of this master's thesis, performed at FOI, was to evaluate a range-gated underwater camera for the identification of bottom objects. The thesis was supported by FMV within the framework of “arbetsorder Systemstöd minjakt (Jan Andersson, KC Vapen)”. The central part has been field trials, which were performed in both turbid and clear water. Conclusions about the performance of the camera system have been drawn, based on resolution and contrast measurements during the field trials. Laboratory testing has also been carried out to measure system-specific parameters, such as the effective gate profile and the camera gate distances. The field trials show that images can be acquired at significantly longer distances with the tested gated camera than with a conventional video camera. The distance at which the target can be detected is increased by a factor of 2; for images suitable for mine identification, the increase is about 1.3. However, studies of the performance of other range-gated systems show that the increase in range for mine identification can be about 1.6. Gated viewing has also been compared to other technical solutions for underwater imaging.
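The timing behind range gating can be made concrete with the standard round-trip relation: the gate opens a delay t = 2R/v after the laser pulse, where v is the speed of light in water. The sketch below uses this textbook relation with illustrative numbers; it is not based on the gate distances measured in the laboratory tests.

```python
# Round-trip timing for a range-gated camera in water: the gate opens a delay
# t = 2R / v after the laser pulse, where v = c / n is the speed of light in
# water. All numbers are illustrative, not measured system parameters.
c = 299_792_458.0        # speed of light in vacuum [m/s]
n_water = 1.34           # approximate refractive index of sea water
v = c / n_water

for target_range in (3.0, 5.0):          # roughly the black/white target ranges above
    delay_ns = 2 * target_range / v * 1e9
    print(f"range {target_range:.0f} m -> gate delay {delay_ns:.1f} ns")

gate_width_ns = 10.0                     # hypothetical gate width
depth_window = v * gate_width_ns * 1e-9 / 2
print(f"a {gate_width_ns:.0f} ns gate spans about {depth_window:.2f} m in depth")
```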
125. Optical Flow Computation on Compute Unified Device Architecture / Optiskt flödeberäkning med CUDA. Ringaby, Erik, January 2008.
There has been rapid progress in graphics processors in recent years, largely because of the demands computer games place on speed and image quality. Because of its special architecture, the graphics processor is much faster at solving parallel problems than an ordinary processor, and due to its increasing programmability it is possible to use it for tasks other than those it was originally designed for. Even though graphics processors have been programmable for some time, it has been quite difficult to learn how to use them. CUDA enables the programmer to use C code, with a few extensions, to program NVIDIA's graphics processors and completely skip the traditional programming models. This thesis investigates whether the graphics processor can be used for calculations without knowledge of how the hardware mechanisms work. An image processing algorithm calculating the optical flow has been implemented. The result shows that it is rather easy to implement programs using CUDA, but some knowledge of how the graphics processor works is required to achieve high performance.
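The abstract does not say which optical-flow algorithm was implemented. As an illustration of the kind of computation involved, the sketch below is a single-scale Lucas-Kanade estimator in NumPy, a classic formulation chosen here because every pixel's small least-squares solve is independent, which is exactly the data-parallel structure that maps well onto CUDA; the window size and test images are arbitrary.

```python
import numpy as np

def lucas_kanade(im1, im2, win=7):
    """Single-scale dense Lucas-Kanade: solve a 2x2 least-squares system per
    pixel over a small window. Each pixel is independent, which is what makes
    this kind of method attractive for a data-parallel GPU implementation."""
    im1, im2 = im1.astype(float), im2.astype(float)
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    h = win // 2
    flow = np.zeros(im1.shape + (2,))
    for r in range(h, im1.shape[0] - h):
        for c in range(h, im1.shape[1] - h):
            A = np.stack([Ix[r-h:r+h+1, c-h:c+h+1].ravel(),
                          Iy[r-h:r+h+1, c-h:c+h+1].ravel()], axis=1)
            b = -It[r-h:r+h+1, c-h:c+h+1].ravel()
            AtA = A.T @ A
            if np.linalg.cond(AtA) < 1e4:          # skip flat, ill-conditioned windows
                flow[r, c] = np.linalg.solve(AtA, A.T @ b)
    return flow

# Tiny self-test: a smooth pattern shifted one pixel to the right.
x, y = np.meshgrid(np.arange(64), np.arange(64))
im1 = np.sin(x / 4.0) + np.cos(y / 5.0)
im2 = np.sin((x - 1) / 4.0) + np.cos(y / 5.0)
print("median horizontal flow:", np.median(lucas_kanade(im1, im2)[..., 0]).round(2))
```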
126. Camera Based Navigation : Matching between Sensor reference and Video image. Olgemar, Markus, January 2008.
The navigation system of a UAV is typically based on an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS). In navigation warfare the GNSS can be jammed, and therefore a third navigation system is needed. The system tried in this thesis is camera-based navigation, where the position is determined from a video camera and a sensor reference. This thesis deals with the matching between the sensor reference and the video image. Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation produces a correlation matrix whose peak indicates the best match. The other method uses point correspondences between the images to estimate a homography, through which a position is obtained; the more point correspondences, the better the position determination. The results have been quite good: the methods found the correct position when the Euler angles of the UAV were known. Normalized cross correlation was the best of the tested methods.
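A sketch of the two matching ideas using OpenCV, run on synthetic data so it is self-contained: normalized cross correlation of the video frame against the sensor reference gives a correlation surface whose peak is the estimated position, and a homography estimated from point correspondences maps the frame centre into reference coordinates. The correspondences and image sizes below are purely illustrative, not the thesis's data.

```python
import cv2
import numpy as np

# Synthetic stand-ins for the sensor reference and the video frame:
# the "frame" is simply a patch cut out of the reference image.
rng = np.random.default_rng(0)
reference = rng.integers(0, 255, (480, 640), dtype=np.uint8)
frame = reference[120:220, 300:420].copy()

# Normalized cross correlation gives a correlation surface over the reference;
# the location of its peak is the estimated position of the frame.
corr = cv2.matchTemplate(reference, frame, cv2.TM_CCORR_NORMED)
_, score, _, top_left = cv2.minMaxLoc(corr)
print("NCC match at", top_left, "score", round(score, 3))      # expect (300, 120)

# The homography route uses point correspondences (here hand-picked and purely
# illustrative; in practice they come from a feature matcher) and maps the
# frame centre into reference coordinates to obtain a position.
pts_frame = np.float32([[0, 0], [119, 0], [119, 99], [0, 99]])
pts_ref = np.float32([[300, 120], [419, 120], [419, 219], [300, 219]])
H, _ = cv2.findHomography(pts_frame, pts_ref)
centre = np.float32([[[frame.shape[1] / 2, frame.shape[0] / 2]]])
print("homography position:", cv2.perspectiveTransform(centre, H).ravel())
```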
127. Make it Meaningful : Semantic Segmentation of Three-Dimensional Urban Scene Models. Lind, Johan, January 2017.
Semantic segmentation of a scene aims to give meaning to the scene by dividing it into meaningful, semantic parts. Understanding the scene is of great interest for all kinds of autonomous systems, but manual annotation is simply too time-consuming, which is why there is a need for an alternative approach. This thesis investigates the possibility of automatically segmenting 3D models of urban scenes, such as buildings, into a predetermined set of labels. The approach was to first acquire ground truth data by manually annotating five 3D models of different urban scenes. The next step was to extract features from the 3D models and evaluate which ones constitute a suitable feature space. Finally, three supervised learners were implemented and evaluated: k-Nearest Neighbour (KNN), Support Vector Machine (SVM) and Random Classification Forest (RCF). Classification was done point-wise, assigning a label to each 3D point in the dense point cloud of the model being classified. The results showed that the most suitable feature space is not necessarily the one containing all features. The KNN classifier achieved the highest average accuracy over all models, correctly classifying 42.5% of the 3D points. The RCF classifier managed to classify 66.7% of the points correctly in one of the models, but performed worse on the rest, resulting in a lower average accuracy than KNN. In general, KNN, SVM and RCF seemed to have different benefits and drawbacks: KNN is simple and intuitive but by far the slowest classifier when dealing with a large set of training data, while SVM and RCF are both fast but more difficult to tune as there are more parameters to adjust. Whether the relatively low peak accuracy was due to the lack of ground truth training data, unbalanced validation models, or the capacity of the learners was not investigated due to the limited time span, but this ought to be investigated in future studies.
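A compact sketch of the point-wise classification setup with scikit-learn, using the three learners named above. The features and labels are random stand-ins; the real feature space, label set and parameter choices are those investigated in the thesis.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-point features (e.g. height above ground, normal
# direction, colour) and manually annotated labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 6))            # one row of features per 3D point
y = rng.integers(0, 4, size=5000)         # e.g. ground / roof / wall / vegetation
X_train, X_test = X[:4000], X[4000:]
y_train, y_test = y[:4000], y[4000:]

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=15),
    "SVM": SVC(kernel="rbf", C=1.0),
    "RCF": RandomForestClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {acc:.1%} of points classified correctly")
```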
128. UKF-SLAM Implementation for the Optical Navigation System of a Lunar Lander. Garcia, Laura, January 2017.
No description available.
129. Evaluation of Aerial Image Stereo Matching Methods for Forest Variable Estimation. Svensk, Joakim, January 2017.
This work investigates the landscape of aerial image stereo matching (AISM) methods suitable for large-scale forest variable estimation. AISM methods are an important source of remotely collected information used in modern forestry to keep track of a growing forest's condition. A total of 17 AISM methods are investigated, out of which 4 are evaluated by processing a test data set consisting of three aerial images. The test area is located in southern Sweden and consists mainly of Norway spruce and Scots pine. From the resulting point clouds and height raster images, a total of 30 different metrics of both height and density types are derived. Linear regression is used to fit functions from the metrics derived from AISM data to a set of forest variables including tree height (HBW), tree diameter (DBW), basal area and volume. Data collected by dense airborne laser scanning is used as ground truth. Results are presented as the RMSE and standard deviation obtained from the linear regression. For tree height, tree diameter, basal area and volume, the RMSE ranged from 7.442% to 10.11%, 11.58% to 13.96%, 32.01% to 35.10% and 34.01% to 38.26%, respectively. All four tested methods achieved comparable estimation quality, although with small differences among them: Keystone and SURE performed somewhat better, while MicMac placed third and Photoscan produced the least accurate result.
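A minimal sketch of the regression step: fitting a linear model from point-cloud metrics to a forest variable and reporting the RMSE as a percentage of the mean, which is how the figures above are expressed. The metric matrix and the target variable are synthetic stand-ins, not the thesis's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: rows are forest plots, columns are height/density metrics
# derived from the AISM point clouds; the target is a reference-measured
# variable such as basal-area-weighted tree height (HBW). Numbers are illustrative.
rng = np.random.default_rng(2)
metrics = rng.uniform(5, 30, size=(200, 30))             # 30 metrics per plot
true_height = 0.8 * metrics[:, 0] + 0.1 * metrics[:, 5] + rng.normal(0, 1.0, 200)

model = LinearRegression().fit(metrics, true_height)
pred = model.predict(metrics)

rmse = np.sqrt(np.mean((pred - true_height) ** 2))
rel_rmse = 100 * rmse / true_height.mean()               # RMSE as % of the mean, as reported above
print(f"RMSE = {rmse:.2f} m ({rel_rmse:.1f} %)")
```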
130. Runway detection in LWIR video : Real time image processing and presentation of sensor data. Cedernaes, Erasmus, January 2016.
Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated with respect to robustness, speed and FPGA acceleration. The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, and by using a modified Hough line transform together with a symmetric search for peaks in the accumulator that the transform returns. A video chain was implemented on a Xilinx ZC702 development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM, and the detection algorithm ran on the CPU, which, however, did not meet the real-time requirement. Strategies were proposed for improving the processing speed through either hardware acceleration or algorithmic changes.
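A rough sketch of the Hough-plus-symmetry idea using OpenCV: detect edges, run a standard (not the modified) Hough line transform, and keep pairs of near-vertical lines whose angles are mirror-symmetric about the vertical, as a runway's two edges are during approach. The synthetic frame, thresholds and tolerance are assumptions for illustration.

```python
import cv2
import numpy as np

# One LWIR frame would go here; a black image with two synthetic runway edges
# keeps the sketch self-contained.
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.line(frame, (280, 470), (310, 100), 255, 2)    # left runway edge
cv2.line(frame, (360, 470), (330, 100), 255, 2)    # right runway edge

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)  # accumulator peaks as (rho, theta)

# On approach the runway appears as two nearly vertical edges whose angles are
# mirror-symmetric about the vertical, i.e. theta1 + theta2 is close to pi.
candidates = [tuple(l[0]) for l in lines] if lines is not None else []
pairs = []
for i in range(len(candidates)):
    for j in range(i + 1, len(candidates)):
        t1, t2 = candidates[i][1], candidates[j][1]
        if abs(t1 + t2 - np.pi) < np.radians(10):
            pairs.append((candidates[i], candidates[j]))
print(f"{len(pairs)} symmetric line pair(s) found")
```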