391

Colorimetric and Multispectral Image Acquisition

Nyström, Daniel January 2006 (has links)
<p>The trichromatic principle of representing color has long dominated color imaging. The reason is the trichromatic nature of human color vision, but since the characteristics of typical color imaging devices differ from those of human eyes, there is a need to go beyond the trichromatic approach. The interest in multi-channel imaging, i.e. increasing the number of color channels, has made it an active research topic with substantial potential for application.</p><p>To achieve consistent color imaging, one needs to map the imaging-device data to the device-independent colorimetric representations CIEXYZ or CIELAB, the key concept of color management. As the color coordinates depend not only on the reflective spectrum of the object but also on the spectral properties of the illuminant, the colorimetric representation suffers from metamerism, i.e. objects that have the same color under a specific illumination may appear different when illuminated by other light sources. Furthermore, when the sensitivities of the imaging device differ from the CIE color matching functions, two spectra that appear different to human observers may produce identical device responses. In contrast, multispectral imaging represents color by the object's physical characteristics, namely its spectrum, which is illuminant independent. With multispectral imaging, different spectra are readily distinguishable, whether they are metameric or not. The spectrum can then be transformed to any color space and be rendered under any illumination.</p><p>The focus of the thesis is high-quality image acquisition in colorimetric and multispectral formats. The image acquisition system used is an experimental system with great flexibility in illumination and image acquisition setup. Besides the conventional trichromatic RGB filters, the system also provides the possibility of acquiring multi-channel images, using 7 narrowband filters.
A thorough calibration and characterization of all the components involved in the image acquisition system is carried out. The spectral sensitivity of the CCD camera, which cannot be derived by direct measurement, is estimated using least squares regression, optimizing the camera response to the measured spectral reflectance of carefully selected color samples.</p><p>To derive mappings to colorimetric and multispectral representations, two conceptually different approaches are used. In the model-based approach, the physical model describing the image acquisition process is inverted to reconstruct spectral reflectance from the recorded device response. In the empirical approach, the characteristics of the individual components are ignored, and the functions are derived by relating the device responses for a set of test colors to the corresponding colorimetric and spectral measurements, using linear and polynomial least squares regression.</p><p>The results indicate that for trichromatic imaging, accurate colorimetric mappings can be derived by the empirical approach, using polynomial regression to CIEXYZ and CIELAB. Because of the media dependency, the characterization functions should be derived for each combination of media and colorants. Accurate spectral reconstruction, however, requires multi-channel imaging, using the model-based approach. Moreover, the model-based approach is general, since it is based on the spectral characteristics of the image acquisition system, rather than on the characteristics of a set of color samples.</p> / Report code: LiU-TEK-LIC- 2006:70
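The least squares estimation of camera sensitivity described in the abstract can be sketched as follows. This is an illustrative toy with synthetic data and a noise-free linear camera model, not the thesis' actual characterization setup; the spectra, wavelength grid, and Gaussian "true" sensitivity are all invented for the demonstration.

```python
import numpy as np

# Synthetic stand-in for the characterization data: N color samples whose
# reflectance spectra are sampled at W wavelengths (400-700 nm, 10 nm steps).
rng = np.random.default_rng(0)
W, N = 31, 50
R = rng.uniform(0.0, 1.0, (N, W))      # measured reflectance spectra (rows)

# A "true" sensitivity curve, used here only to generate camera responses.
wl = np.linspace(400, 700, W)
s_true = np.exp(-0.5 * ((wl - 550.0) / 40.0) ** 2)

# Linear camera model: response d = R s (no noise in this sketch).
d = R @ s_true

# Least squares estimate of the sensitivity: minimize ||R s - d||^2.
s_est, *_ = np.linalg.lstsq(R, d, rcond=None)
```

With more samples than wavelengths and noise-free responses, the regression recovers the sensitivity exactly; in practice, noise and the smoothness of real sensitivities usually call for regularized variants.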
392

Study of Local Binary Patterns

Lindahl, Tobias January 2007 (has links)
<p>This Master's thesis studies the concept of local binary patterns, which describe the neighbourhood of a pixel in a digital image by binary derivatives. The operator is often used in texture analysis and has been successfully used in facial recognition.</p><p>This thesis suggests two methods based on some basic ideas of Björn Kruse and on studies of the literature on the subject. The first suggested method is an algorithm which reproduces images from their local binary patterns by a kind of integration of the binary derivatives. This method is a way to prove the preservation of information. The second suggested method is a technique for interpolating missing pixels in a single-CCD camera, based on local binary patterns and machine learning. The algorithm has shown some very promising results, even though in its current form it does not keep up with the best algorithms of today.</p>
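The abstract does not spell out the operator, so here is a minimal sketch of the classic 3×3 local binary pattern (my own illustrative code, not the author's): each interior pixel is encoded as an 8-bit number, one bit per neighbour, set when the neighbour is at least as bright as the centre.

```python
import numpy as np

def lbp_3x3(img):
    """8-neighbour local binary pattern for the interior pixels.

    Each interior pixel gets an 8-bit code: one bit per neighbour, set
    when that neighbour is >= the centre value (a binary derivative)."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    # Neighbours in fixed clockwise order, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (n >= c).astype(np.int64) << bit
    return code

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]])
code = lbp_3x3(img)   # bits 0-2 set by the bright top row -> code 7
```

Histograms of such codes over image regions are what texture and face recognition pipelines typically compare.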
393

Evaluation of tone mapping operators for use in real time environments

Hellsten, Jonas January 2007 (has links)
<p>As real-time visualizations become more realistic, it also becomes more important to simulate the perceptual effects of the human visual system. Such effects include the response to varying illumination, glare and the differences between photopic and scotopic vision. This thesis evaluates several different tone mapping methods to allow a greater dynamic range to be used in real-time visualizations. Several tone mapping methods have been implemented in the Avalanche Game Engine and evaluated using a small test group. To increase immersion in the visualization, several filters aimed at simulating perceptual effects have also been implemented. The primary goal of these filters is to simulate scotopic vision. The tests showed that two tone mapping methods would be suitable for the environment used in the tests. The S-curve tone mapping method gave the best result, while the Mean Value method gave good results while being the simplest and cheapest to implement. The test subjects agreed that the simulation of scotopic vision enhanced the immersion in a visualization. The primary difficulties in this work have been the lack of dynamic range in the input images and the challenges of coding real-time graphics using a graphics processing unit.</p>
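The abstract does not define its S-curve, so the sketch below uses one common choice: a logistic function of log-luminance. The `midpoint` and `slope` parameters are my own illustrative names, not the thesis' parameterization.

```python
import numpy as np

def s_curve_tonemap(lum, midpoint=0.18, slope=2.0):
    """Map HDR luminance to (0, 1) with an S-shaped curve.

    A logistic function of log-luminance: contrast is preserved around
    `midpoint` (which maps to 0.5) and compressed in shadows/highlights."""
    x = np.log2(np.maximum(lum, 1e-9) / midpoint)
    return 1.0 / (1.0 + np.exp(-slope * x))

hdr = np.array([0.001, 0.18, 1.0, 100.0])   # scene luminances
ldr = s_curve_tonemap(hdr)                  # display values in (0, 1)
```

In a real-time engine this curve would typically run per pixel in a fragment shader, with `midpoint` driven by a running average of scene luminance to simulate adaptation.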
394

Direction estimation on 3D-tomography images of jawbones

Mazeyev, Yuri January 2008 (has links)
<p>The present work presents a technique for estimating the optimal direction for placing a dental implant. A volumetric computed tomography (CT) scan is used to support the search. The work offers criteria for the optimal implant placement direction and methods for evaluating a direction's significance. The technique utilizes the structure tensor to find a normal to the jawbone surface. The direction of that normal is then used as the initial direction for the search for the optimal direction.</p><p>The technique described in the present work aims to support the doctor's decisions during dental implantation treatment.</p>
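The structure tensor step can be sketched as follows: accumulate the outer products of intensity gradients over a volume and take the eigenvector of the largest eigenvalue as the dominant direction. This is a generic illustration on a synthetic ramp, not the thesis' jawbone pipeline.

```python
import numpy as np

def structure_tensor_direction(vol):
    """Dominant gradient direction of a 3-D volume.

    Accumulates the structure tensor J = sum of g g^T over voxels, from
    central-difference gradients, and returns the unit eigenvector of
    its largest eigenvalue: an estimate of the local surface normal."""
    gz, gy, gx = np.gradient(vol.astype(float))   # vol indexed [z, y, x]
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])
    J = g @ g.T                                   # 3x3 structure tensor
    w, v = np.linalg.eigh(J)                      # eigenvalues ascending
    return v[:, -1]                               # already unit length

# An intensity ramp along x: the dominant direction is the x axis.
z, y, x = np.mgrid[0:5, 0:5, 0:5]
normal = structure_tensor_direction(x.astype(float))
```

In practice the gradients and the tensor are smoothed with Gaussian kernels before the eigen-analysis, which makes the estimate robust to CT noise.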
395

Construction of a solid 3D model of geology in Sardinia using GIS methods

Tavakoli, Saman January 2009 (has links)
<p>3D visualization of geological structures is a very efficient way to create a good understanding of geological features. It is not only an illustrative aid for lay people, but also a comprehensive method to interpret the results of the work. Geologists, geophysics engineers and GIS experts sometimes need to visualize an area to accomplish their research. Visualization can show how sample data are distributed over the area and can therefore serve as a suitable approach to validate the results. Among the different 3D modeling methods, some are expensive or complicated. Therefore, a methodology enabling the easy and cheap creation of a 3D construction is in high demand.</p><p>However, several obstacles are faced during the process of constructing a 3D model of geology. The main debate over suitable interpolation methods is the fact that 3D modelers may face discrepancies leading to different results even when they are working with the same set of data. Furthermore, the data themselves can often be a source of errors. Hence, it is extremely important to decide whether to omit such data or to adopt another strategy. Even after considering all these points, the work may still not be accurate enough to be used for scientific research if the interpretation of the work is not done precisely. This research sought to describe an approach for 3D modeling of the Sedini platform in Sardinia, Italy. GIS was used as flexible software together with Surfer and Voxler. Data manipulation, geodatabase creation and interpolation tests were all done with the aid of GIS. A variety of interpolation methods available in Surfer, together with ArcView, were used to select a suitable method.</p><p>A solid 3D model was created in the Voxler environment. In Voxler, in contrast to many other types of 3D software, four components are needed to construct the 3D model. A C value was used as the 4<sup>th</sup> component, in addition to the XYZ coordinates, to differentiate special features in the platform and to do the gridding based on the chosen value. With the aid of the C value, one can mark a layer of interest to identify it among the other layers.</p><p>The final result shows a solid 3D model of the Sedini platform including both surfaces and subsurfaces. An isosurface with its unique value (isovalue) can mark a layer of interest and make it easy to interpret the results. However, the errors in some parts of the model are also noticeable. Since data acquisition was done for studying the geology and mineralogy characteristics of the area, relatively few data points were collected per volume, in accordance with the main goals of the initial project. Moreover, in some parts of the geological border lines, the density of sample points is not high enough to estimate the accurate location of the lines.</p><p>The study result is applicable to a broad range of geological studies. Resource evaluation, geomorphology, structural geology and GIS are only a few examples of its application. The results of the study can be compared to the results of similar works where different software packages have been used, so as to comprehend the pros and cons of each, as well as the appropriate application of each package for a special task.</p><p><em>Keywords: GIS, Image Interpretation, Geodatabase, Geology, Interpolation, 3D Modeling</em></p>
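One of the simplest scattered-data interpolators offered by packages such as Surfer is inverse distance weighting. The sketch below is a generic IDW implementation for illustration only; the sample locations and values are invented, and the thesis compared several interpolation methods, not just this one.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted (IDW) interpolation of scattered samples.

    Each sample contributes with weight 1/d^power; a query point that
    coincides with a sample returns that sample's value exactly."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(points - np.asarray(query, dtype=float), axis=1)
    if np.any(d == 0.0):
        return float(values[d == 0.0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # sample locations
vals = [0.0, 1.0, 2.0]                       # e.g. layer depths
```

IDW honours the sample values exactly but cannot extrapolate beyond their range, which is one reason the choice of interpolator matters for the kind of sparse geological data discussed above.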
396

Implementation and Validation of Independent Vector Analysis

Claesson, Kenji January 2010 (has links)
<p>This Master's Thesis was part of the project called Multimodalanalysis at the Department of Biomedical Engineering and Informatics at the Umeå University Hospital in Umeå, Sweden. The aim of the project is to develop multivariate measurement and analysis methods for skeletal muscle physiology. One of the methods used to scan the muscle is functional ultrasound. In a study performed by the project group, data was acquired while test subjects followed a certain exercise scheme. Since there is currently no superior method to analyze the resulting data (in the form of ultrasound video sequences), several methods are being considered. One such method is called Independent Vector Analysis (IVA). IVA is a statistical method for finding independent components in a mix of components. This Master's Thesis is about segmenting and analyzing the ultrasound images with the help of IVA, to validate whether it is a suitable method for this kind of task.</p><p>First, the algorithm was tested on generated mixed data to find out how well it performed. The results were very accurate, considering that the method only uses approximations, although some expected deviation from the true values occurred.</p><p>When the algorithm was deemed to perform satisfactorily, it was tested on the data gathered in the study. The result may well reflect an approximation of the true solution, since the resulting segmented signals seem to move in a plausible way. But the method has weak sides (which have been minimized as far as possible), and all error analysis has been done by the human eye, which is definitely a weak point. For the time being, however, it is more important to analyze trends in the signals than exact numbers, so as long as the signals behave in a realistic way the result cannot be said to be completely wrong. The overall results of the method were therefore deemed adequate for the application at hand.</p> / Multimodalanalys
397

Segmentation and Visualisation of Human Brain Structures

Hult, Roger January 2003 (has links)
<p>In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that is seen.</p><p>Grey-level morphology methods are used in the segmentations to make the tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, binary operations are used, such as logic operations between masks and binary morphology operations.</p><p>The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice, since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex.</p><p>A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.</p>
398

Automated object-based change detection for forest monitoring by satellite remote sensing : applications in temperate and tropical regions

Desclée, Baudouin 30 May 2007 (has links)
Forest ecosystems have recently received worldwide attention due to their biological diversity and their major role in the global carbon balance. Detecting forest cover change is crucial for reporting forest status and assessing the evolution of forested areas. However, existing change detection approaches based on satellite remote sensing are not well suited to rapidly processing the large volume of earth observation data. Recent advances in image segmentation have opened up opportunities for a new object-based monitoring system. <br> <br> This thesis aims at developing and evaluating an automated object-based change detection method dedicated to high spatial resolution satellite images for identifying and mapping forest cover changes in different ecosystems. This research characterized the spectral reflectance dynamics of the temperate forest stand cycle and found that using several spectral bands detects forest cover changes better than any single band or vegetation index over different time periods. Combining multi-date image segmentation, image differencing and a dedicated statistical procedure of multivariate iterative trimming, an automated change detection algorithm was developed. This process has been further generalized in order to automatically derive an up-to-date forest mask and detect various deforestation patterns in tropical environments.<br> <br> Forest cover changes were detected with very high accuracy (>90%) using 3 SPOT-HRVIR images over temperate forests. Furthermore, the overall results were better than those of a pixel-based method. Overall accuracies ranging from 79 to 87% were achieved using SPOT-HRVIR and Landsat ETM imagery for identifying deforestation in two different case studies in the Virunga National Park (DR Congo). Last but not least, a new multi-scale mapping solution has been designed to represent change processes using spatially explicit maps, i.e. deforestation rate maps.
By successfully applying these complementary conceptual developments, a significant step has been taken toward an operational system for monitoring forests in various ecosystems.
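The combination of image differencing and iterative trimming can be sketched in one dimension as follows. This is a simplified, univariate illustration of the general idea, not the thesis' multivariate procedure; the difference values and the change magnitude are synthetic.

```python
import numpy as np

def iterative_trim_change(diff, k=3.0, iters=20):
    """Flag changed pixels in a difference image by iterative trimming.

    Repeatedly estimates mean and standard deviation from the pixels
    currently labelled "no change" and drops those beyond k standard
    deviations, so the statistics converge on the unchanged background.
    Returns a boolean mask that is True where change is detected."""
    diff = np.ravel(np.asarray(diff, dtype=float))
    keep = np.ones(diff.size, dtype=bool)
    for _ in range(iters):
        m, s = diff[keep].mean(), diff[keep].std()
        new_keep = np.abs(diff - m) <= k * s
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return ~keep

rng = np.random.default_rng(2)
diff = rng.normal(0.0, 1.0, 1000)   # stable background differences
diff[:20] += 15.0                   # simulated clear-cut pixels
changed = iterative_trim_change(diff)
```

The trimming step is what makes the threshold self-calibrating: the change pixels themselves are excluded from the no-change statistics, so a fixed global threshold is never needed.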
399

The Analysis of Visual Motion: From Computational Theory to Neuronal Mechanisms

Hildreth, Ellen C., Koch, Christof 01 December 1986 (has links)
This paper reviews a number of aspects of visual motion analysis in biological systems from a computational perspective. We illustrate the kinds of insights that have been gained through computational studies and how these observations can be integrated with experimental studies from psychology and the neurosciences to understand the particular computations used by biological systems to analyze motion. The particular areas of motion analysis that we discuss include early motion detection and measurement, the optical flow computation, motion correspondence, the detection of motion discontinuities, and the recovery of three-dimensional structure from motion.
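The optical flow computation discussed in the review can be illustrated with the classic gradient-constraint approach (a generic Lucas-Kanade-style sketch, not a method proposed by this paper): assume one translational velocity for a patch and solve the brightness-constancy constraint by least squares.

```python
import numpy as np

def lucas_kanade_patch(img1, img2):
    """One translational velocity (u, v) for a whole patch.

    Solves the optical flow constraint Ix*u + Iy*v = -It by least
    squares over every pixel of the patch (Lucas-Kanade style)."""
    Iy, Ix = np.gradient(img1.astype(float))
    It = img2.astype(float) - img1.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v

# A smooth pattern shifted half a pixel to the right between frames.
y, x = np.mgrid[0:16, 0:16]
img1 = np.sin(x / 3.0) + np.cos(y / 4.0)
img2 = np.sin((x - 0.5) / 3.0) + np.cos(y / 4.0)
u, v = lucas_kanade_patch(img1, img2)
```

The single-pixel constraint is one equation in two unknowns (the aperture problem the review discusses); pooling the constraint over a patch is what makes the system solvable.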
400

The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations

Grzywacz, Norberto M., Hildreth, Ellen C. 01 October 1985 (has links)
Perceptual studies suggest that the visual system uses the "rigidity" assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and non-rigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman's method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
