Segmentation of Carotid Arteries from 3D and 4D Ultrasound Images / Segmentering av halsartärer från 3D och 4D ultraljudsbilder

Mattsson, Per, Eriksson, Andreas January 2002 (has links)
This thesis presents a 3D semi-automatic segmentation technique for extracting the lumen surface of the carotid arteries, including the bifurcation, from 3D and 4D ultrasound examinations.

Ultrasound images are inherently noisy. Therefore, to aid the inspection of the acquired data, an adaptive edge-preserving filtering technique is used to reduce the generally high noise level. The segmentation process starts with edge detection using a recursive and separable 3D Monga-Deriche-Canny operator. To reduce the computation time needed for segmentation, a seeded region-growing technique is used to build an initial model of the artery. The final segmentation is based on an inflatable balloon model, which deforms the initial model to fit the ultrasound data. The balloon model is implemented with the finite element method.

The segmentation technique produces 3D models that are intended as pre-planning tools for surgeons. The results from a healthy person are satisfactory, and the results from a patient with stenosis seem rather promising. A novel 4D model of the wall motion of the carotid vessels has also been obtained, from which 3D compliance measures can easily be derived.
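As a rough illustration of the early pipeline stages, the sketch below pairs a Gaussian-derivative gradient magnitude (standing in for the recursive Monga-Deriche-Canny operator) with simple seeded region growing. The volume, seed point, and tolerance are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy import ndimage

def edge_magnitude_3d(volume, sigma=1.5):
    """Gaussian-derivative gradient magnitude of a 3D ultrasound volume
    (a simple stand-in for the Monga-Deriche-Canny edge operator)."""
    grads = [ndimage.gaussian_filter1d(volume, sigma, axis=a, order=1)
             for a in range(3)]
    return np.sqrt(sum(g * g for g in grads))

def seeded_region_growing(volume, seed, tol=10.0):
    """Initial lumen model: the connected set of voxels whose intensity
    stays within `tol` of the seed intensity."""
    mask = np.abs(volume - volume[seed]) <= tol
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]          # component containing the seed

volume = np.random.rand(64, 64, 64) * 255  # placeholder for real 3D data
edges = edge_magnitude_3d(volume)          # edge map for the balloon model
initial_model = seeded_region_growing(volume, seed=(32, 32, 32))
```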

Größenanalyse an nicht separierten Holzpartikeln mit regionenbildenden Algorithmen am Beispiel von OSB-Strands / Size analysis of unseparated wood particles with region-based algorithms using the example of OSB strands

Plinke, Burkhard 12 November 2012 (has links) (PDF)
The strength of wood-based materials made of several layers of large, oriented particles, such as OSB (oriented strand board), is a superposition of the strengths of the layers according to the orientation of the particles and depending on their size distribution. It would be desirable to measure particle geometry and orientation close to the production process, e.g. with a "view onto the mat". Currently, continuous on-line measurements of the particle geometry are not possible, while measurements of separated particles would be too costly and time-consuming. Before particle shapes can be measured, they have to be reconstructed in a multi-stage procedure which treats an image scene with strands as "gray-value mountains". Segmentation using a watershed algorithm is not sufficient, and a two-step edge detector according to Canny does not yield closed object shapes either. A multi-step procedure based on threshold decomposition and recombination, however, is successful: the gray values in the image are transformed into a reduced, uniformly distributed set of threshold levels. The local morphological gradients between these levels are used to rebuild the original particle shapes by summing the threshold levels. Only shapes with a plausible size corresponding to real particles are included, in order to suppress noise. The reconstruction from threshold levels is then matched with the strong edges detected in the original image by a Canny operator, and is finally cleaned with morphological operators. This extended threshold analysis produces sufficiently segmented images with object shapes corresponding closely to the particle shapes. Standard algorithms are used to measure geometric features of the objects; approximating particle shapes by ellipses of equal moments of inertia proved useful. Remaining incorrectly detected objects are removed by form factors and additional size intervals.
Size distributions for the parameters length and width are presented and characterized as density distribution histograms, weighted by the object area and linearly scaled (q2 distribution), as well as by the cumulative distribution and various quantiles. The demonstration program "SizeBulk", based on MATLAB, was developed to demonstrate the computation and the interaction of the algorithms. It can process image sequences, and different variants of image preprocessing and parametrization can be tested with it. However, the detection procedure yields complete contours only for the particles in the top layer; objects in lower layers are partially hidden and can therefore only be measured incompletely. Synthetic images with separated and with overlaid objects of known size distribution were generated and subjected to the detection and measurement procedure to study this effect. It was shown that the size statistics are influenced by the covering effect and by strand orientation, but that at least the modes of the most important size parameters, length and width, usually remain recognizable. Besides the synthetic images, several batches of OSB strands from industrial and laboratory production served as test material. They were measured both manually separated and arranged as a mat. For real strands, the same covering effects on the size distributions appeared as in the simulation. Under stable image acquisition conditions and with identical processing parameters, the characteristics of different strand batches can be measured well, and changes in the measured size distributions can be attributed unambiguously to the geometric properties of the strands. The suitability of the processing chain for characterizing strand size distributions was also confirmed on images acquired solely from the mat on a forming line. Moreover, it was shown that the extended threshold analysis can also evaluate images of particleboard surfaces and support conclusions about the size distribution of the surface-layer particles. The method presented here is therefore a good and novel way to characterize the size distributions of OSB strands close to the process, from gray-value images of partial areas of the mat, and is in principle suitable for industrial use. Suitable methods were previously unknown, at least for wood particles; this ability to detect trends in the strand size distribution automatically opens new perspectives for process monitoring.
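A minimal sketch of the extended threshold-level (height-layer) analysis, assuming scikit-image is available; the level count and the plausible area interval are invented for illustration, and the morphological cleaning steps are reduced to masking out strong Canny edges.

```python
import numpy as np
from skimage import exposure, feature, measure

def strand_mask(gray, n_levels=8, area_range=(200, 20000)):
    """Rebuild strand regions from equalized threshold levels, keeping
    only components whose area is plausible for a strand, then split
    the result at the strong Canny edges of the original image."""
    eq = exposure.equalize_hist(gray)            # near-uniform level population
    levels = np.clip(np.floor(eq * n_levels), 0, n_levels - 1)
    keep = np.zeros(gray.shape, bool)
    for lv in range(n_levels):
        lab = measure.label(levels >= lv)        # one height layer
        for region in measure.regionprops(lab):
            if area_range[0] <= region.area <= area_range[1]:
                keep |= lab == region.label
    return keep & ~feature.canny(gray, sigma=2.0)

gray = np.random.rand(256, 256)                  # placeholder for a mat image
props = measure.regionprops(measure.label(strand_mask(gray)))
# approximate each strand by the ellipse with equal moments
length_width = [(p.major_axis_length, p.minor_axis_length) for p in props]
```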

An intuitive motion-based input model for mobile devices

Richards, Mark Andrew January 2006 (has links)
Traditional methods of input on mobile devices are cumbersome and difficult to use. Devices have become smaller while their operating systems have become more complex, to the extent that they are approaching the level of functionality found in desktop computer operating systems. The buttons and toggle-sticks currently employed by mobile devices are a relatively poor replacement for the keyboard-and-mouse style user interfaces of their desktop counterparts. For example, when looking at an image on a device, we should be able to move the device to the left to indicate that we wish the image to be panned in the same direction. This research investigates a new input model based on the natural hand motions and reactions of users. The model developed in this work uses the generic embedded video cameras available on almost all current-generation mobile devices to determine how the device is being moved, and maps this movement to an appropriate action. Surveys using mobile devices were undertaken to determine both the appropriateness and efficacy of such a model, and to collect the foundational data with which to build it. Direct mappings between motions and inputs were achieved by analysing users' motions and reactions in response to different tasks. Once the framework was completed, a proof of concept was created on the Windows Mobile platform. This proof of concept leverages DirectShow and Direct3D to track objects in the video stream, maps these objects to a three-dimensional plane, and determines device movements from this data. This input model holds the promise of being a simpler and more intuitive method for users to interact with their mobile devices, with the added advantage that no hardware additions or modifications to existing mobile devices are required.
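A minimal sketch of the core idea, using phase correlation between consecutive camera frames as a stand-in for the thesis's DirectShow/Direct3D object tracking; the dead-zone threshold is an assumption.

```python
import cv2
import numpy as np

def pan_gesture(prev_frame, cur_frame, dead_zone=2.0):
    """Map the dominant translation between two BGR camera frames to a
    pan direction, or None if the motion is within the dead zone."""
    prev = np.float32(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    cur = np.float32(cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY))
    (dx, dy), _ = cv2.phaseCorrelate(prev, cur)
    if max(abs(dx), abs(dy)) < dead_zone:
        return None                       # ignore hand jitter
    if abs(dx) >= abs(dy):
        return 'left' if dx < 0 else 'right'
    return 'up' if dy < 0 else 'down'
```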

On-line C-arm intrinsic calibration by means of an accurate method of line detection using the Radon transform / Calibration intrinsèque « on-line » d'un C-arm par une méthode de détection de droite avec la transformée de Radon

Spencer, Benjamin 18 December 2015 (has links)
Mobile isocentric X-ray C-arm systems are an imaging tool used during a variety of interventional and image-guided procedures. Three-dimensional images can be produced from multiple projection images of a patient or object as the C-arm rotates around the isocenter, provided the C-arm geometry is known. Due to gravity effects and mechanical instabilities, the C-arm source and detector geometry undergo significant non-ideal and possibly non-reproducible deformation, which requires a process of geometric calibration. This research investigates the use of the projection of the slightly closed X-ray tube collimator edges in the image field of view to provide on-line intrinsic calibration of C-arm systems. A method of thick straight-edge detection has been developed which outperforms the commonly used Canny filter edge detection technique in both simulation and real-data investigations. This edge detection technique exhibits excellent precision in detecting the edge angles and positions (phi, s) in the presence of simulated C-arm deformation and image noise: phi{RMS} = +/- 0.0045 degrees and s{RMS} = +/- 1.67 pixels. Following this, C-arm intrinsic calibration by means of accurate edge detection has been evaluated in the framework of 3D image reconstruction.
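A minimal sketch of line detection via the Radon transform, assuming scikit-image: a straight collimator edge produces a sharp peak in the sinogram whose coordinates give the line angle phi and offset s. The naive peak picking below omits the sub-degree/sub-pixel refinement achieved in the thesis.

```python
import numpy as np
from skimage.transform import radon

def detect_line(edge_image):
    """Return (phi, s): angle in degrees and offset in pixels of the
    strongest straight line, read off the peak of the sinogram."""
    theta = np.linspace(0.0, 180.0, 1800, endpoint=False)  # 0.1 degree bins
    sinogram = radon(edge_image, theta=theta)
    s_idx, t_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return theta[t_idx], s_idx - sinogram.shape[0] // 2    # offset from centre
```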

Studies on Kernel Based Edge Detection and Hyper Parameter Selection in Image Restoration and Diffuse Optical Image Reconstruction

Narayana Swamy, Yamuna January 2017 (has links) (PDF)
Computational imaging plays an important role in understanding and analysing captured images, and both image segmentation and restoration are integral parts of it. The studies performed in this thesis are focused on developing novel algorithms for image segmentation and restoration. A study related to the use of the Morozov discrepancy principle in diffuse optical imaging is also presented, showing that hyper-parameter selection can be performed with ease.

The Laplacian of Gaussian (LoG) and Canny operators use Gaussian smoothing before applying the derivative operator for edge detection in real images. The LoG kernel is based on the second derivative and is highly sensitive to noise compared with the Canny edge detector. A new edge detection kernel, called the Helmholtz of Gaussian (HoG), which provides higher diffusivity, is developed in this thesis and is shown to be more robust to noise. The formulation of the HoG kernel is similar to that of the LoG, and it is shown both theoretically and experimentally that the LoG is a special case of the HoG. Used as an edge detector, this kernel exhibited superior performance compared with the LoG, Canny, and wavelet-based edge detectors for the standard test cases in both one and two dimensions.

The linear inverse problem encountered in the restoration of blurred noisy images is typically solved via Tikhonov minimization. The outcome (the restored image) of such minimization depends strongly on the choice of the regularization parameter. In the absence of prior information about the noise levels in the blurred image, finding this regularization (hyper) parameter in an automated way becomes extremely challenging. Available methods such as Generalized Cross Validation (GCV) may not yield optimal results in all cases. A novel method that relies on the minimal residual method for finding the regularization parameter automatically is proposed here and systematically compared with the GCV method. It is shown that the proposed method is superior to GCV in providing high-quality restored images when the noise levels are high.

Diffuse optical tomography uses near-infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, with the ability to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring advanced computational methods to compensate for this. An automated method for the selection of the regularization (hyper) parameter that incorporates the Morozov discrepancy principle (MDP) into the Tikhonov method is proposed and shown to be promising for dynamic diffuse optical tomography.
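A minimal sketch of Tikhonov restoration with the regularization parameter chosen by a Morozov-style discrepancy search, assuming the noise norm is known (the thesis's minimal-residual method for unknown noise levels is not reproduced here); the Fourier-domain blur model and bisection bounds are assumptions.

```python
import numpy as np

def tikhonov_deblur(blurred, psf, lam):
    """Fourier-domain Tikhonov filter: X = conj(H) * Y / (|H|^2 + lam)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

def choose_lambda_mdp(blurred, psf, noise_norm, lo=1e-8, hi=1e2, iters=40):
    """Log-scale bisection: pick lam so that the residual norm
    ||H x(lam) - y|| matches the known noise norm (Morozov)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    for _ in range(iters):
        lam = np.sqrt(lo * hi)                       # geometric midpoint
        x = tikhonov_deblur(blurred, psf, lam)
        reblurred = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
        residual = np.linalg.norm(reblurred - blurred)
        lo, hi = (lo, lam) if residual > noise_norm else (lam, hi)
    return np.sqrt(lo * hi)
```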

Multiscale methods in signal processing for adaptive optics / Méthode multi-échelles en traitement du signal pour optique adaptative

Maji, Suman Kumar 14 November 2013 (has links)
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear approach derived from the Microcanonical Multiscale Formalism (MMF). MMF builds on established concepts in statistical physics and is naturally suited to the study of the multiscale properties of complex natural signals, mainly thanks to the precise numerical estimation of geometrically localized critical exponents, called singularity exponents. These exponents quantify the degree of predictability locally at each point of the signal domain, and they provide information on the dynamics of the associated system. We show that a multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients from the wavefront sensor to be propagated along the scales up to a higher resolution. We compare our results with those obtained by linear approaches, which allows us to offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
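For context, a minimal sketch of the classical linear baseline such methods are compared against: least-squares integration of the measured phase gradients with a Fourier-domain (Poisson) solver, assuming periodic boundaries. The multiscale propagation of singularity exponents itself is beyond a short sketch.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares phase from its x/y gradients via a Fourier-domain
    Poisson solver (periodic boundaries; piston left at zero)."""
    n, m = gx.shape
    wx = 2j * np.pi * np.fft.fftfreq(m).reshape(1, m)   # x frequencies
    wy = 2j * np.pi * np.fft.fftfreq(n).reshape(n, 1)   # y frequencies
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = np.abs(wx) ** 2 + np.abs(wy) ** 2
    denom[0, 0] = 1.0                 # avoid 0/0; mean (piston) is arbitrary
    Z = (np.conj(wx) * Gx + np.conj(wy) * Gy) / denom
    return np.real(np.fft.ifft2(Z))
```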

Improving performance of non-intrusive load monitoring with low-cost sensor networks / Amélioration des performances de supervision de charges non intrusive à l'aide de capteurs sans fil à faible coût

Le, Xuan-Chien 12 April 2017 (has links)
In smart homes, human intervention in the energy system needs to be eliminated as much as possible, and an energy management system is required to automatically adjust the power consumption of the electrical devices. To design such a system, a load monitoring system must be deployed in one of two ways: intrusive or non-intrusive. The intrusive approach requires a high deployment cost and too much technical intervention in the power supply. Therefore, the Non-Intrusive Load Monitoring (NILM) approach, in which the operation of a device can be detected from features extracted from the aggregate power consumption, is more promising.
The difficulty for any NILM algorithm is the ambiguity among devices with the same power characteristics. To overcome this challenge, this thesis proposes using external information to improve the performance of existing NILM algorithms. The first proposed additional features relate to the previous state of each device, such as the state transition probability or the Hamming distance between the current state and the previous state. They are used to select the most suitable set of operating devices among all possible combinations when solving the l1-norm minimization problem of NILM by a brute-force algorithm. Besides this, we also propose using another external feature, the operating probability of each device, provided by an additional Wireless Sensor Network (WSN). Unlike intrusive load monitoring, in this so-called SmartSense system only a subset of all devices is monitored by sensors, which makes the system much less intrusive. Two approaches are applied in the SmartSense system. The first applies an edge detector to detect step changes in the power signal and compares them with an existing library to identify the corresponding devices. The second solves the l1-norm minimization problem of NILM with a compositional Pareto-algebraic heuristic and dynamic programming algorithms. Simulation results show that the performance of the proposed algorithms improves significantly with the operating probability of the monitored devices provided by the WSN. Because only part of the devices are monitored, the selected ones must satisfy criteria such as a high usage rate and frequent confusion of their signatures with those of other devices.
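A minimal sketch of the brute-force l1-norm matching with a previous-state (Hamming distance) penalty described above; the device signatures and the penalty weight are invented for illustration.

```python
from itertools import product
import numpy as np

SIGNATURES = np.array([120.0, 800.0, 1500.0, 60.0])  # watts per device (assumed)

def disaggregate(total_power, prev_state, mu=50.0):
    """Pick the 0/1 device state vector x minimizing
    |total - signatures . x| + mu * Hamming(x, prev_state)."""
    best, best_cost = None, np.inf
    for bits in product((0, 1), repeat=len(SIGNATURES)):
        x = np.array(bits)
        cost = abs(total_power - SIGNATURES @ x) + mu * np.sum(x != prev_state)
        if cost < best_cost:
            best, best_cost = x, cost
    return best

print(disaggregate(1620.0, prev_state=np.array([0, 0, 1, 0])))  # -> [1 0 1 0]
```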

Spatially Adaptive Analysis and Segmentation of Polarimetric SAR Data

Wang, Wei January 2017 (has links)
In recent years, Polarimetric Synthetic Aperture Radar (PolSAR) has been one of the most important instruments for earth observation and is increasingly used in various remote sensing applications. Statistical modelling and scattering analysis are the two main ways of interpreting PolSAR data and have been intensively investigated over the past two decades. Moreover, spatial analysis has been applied to PolSAR data and found to be beneficial for achieving more accurate interpretation results. This thesis focuses on extracting typical spatial information, i.e., edges and regions, by exploring the statistical characteristics of PolSAR data. Existing spatial analysis methods are mainly based on the complex Wishart distribution, which characterizes the inherent statistical features in homogeneous areas well. However, non-Gaussian models give a better representation of PolSAR statistics and therefore have the potential to improve the performance of spatial analysis, especially in heterogeneous areas. In addition, traditional fixed-shape windows cannot accurately estimate the distribution parameters in some complicated areas, leading to the loss of refined spatial details. Furthermore, many existing methods are not spatially adaptive, so the obtained results are promising in some areas but unsatisfactory in others. This thesis is therefore dedicated to extracting spatial information by applying non-Gaussian statistical models and spatially adaptive strategies. The specific objectives are: (1) to develop a reliable edge detection method, (2) to develop a spatially adaptive superpixel generation method, and (3) to investigate a new framework for region-based segmentation. Automatic edge detection plays a fundamental role in spatial analysis, whereas the performance of classical PolSAR edge detection methods is limited by fixed-shape windows. Paper 1 investigates an enhanced edge detection method using the proposed directional span-driven adaptive (DSDA) window. The DSDA window has variable size and flexible shape, and overcomes the limitation of fixed-shape windows by adaptively selecting homogeneous samples. The spherically invariant random vector (SIRV) product model is adopted to characterize the PolSAR data, and a span ratio is combined with the SIRV distance to highlight the dissimilarity measure. The experimental results demonstrate that the proposed method can detect not only obvious edges, but also tiny and inconspicuous edges in heterogeneous areas. Edge detection and region segmentation are two important aspects of spatial analysis. Regarding region segmentation, Paper 2 presents an adaptive PolSAR superpixel generation method based on the simple linear iterative clustering (SLIC) framework. In the k-means clustering procedure, multiple cues, including polarimetric, spatial, and texture information, are considered in the distance measure. Since a constant weighting factor balancing spectral similarity and spatial proximity may cause over- or under-segmentation in different areas, the proposed method sets the factor adaptively based on a homogeneity analysis. In heterogeneous areas, the spectral similarity then outweighs the spatial constraint, generating superpixels that better preserve local details and refined structures.
Paper 3 investigates another PolSAR superpixel generation method, approached from the global optimization perspective using the entropy rate method. The distance between neighbouring pixels is calculated based on their corresponding DSDA regions, and the SIRV distance and the Wishart distance are combined. The proposed method thus makes good use of the entropy rate framework while incorporating the merits of both distances. The superpixels are generated in a homogeneity-adaptive manner, resulting in a smooth representation of land covers in homogeneous areas and well-preserved details in heterogeneous areas.
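As an illustration of the statistical distances involved, below is a minimal sketch of a symmetric Wishart-type dissimilarity between two region-averaged polarimetric covariance matrices. Fixed regions stand in for the adaptive DSDA windows, and the exact distance definitions used in the papers may differ.

```python
import numpy as np

def symmetric_wishart_distance(cov_a, cov_b):
    """Symmetrized Wishart dissimilarity between two averaged 3x3
    polarimetric covariance matrices; zero iff the matrices are equal."""
    dim = cov_a.shape[0]
    t1 = np.trace(np.linalg.solve(cov_b, cov_a))    # tr(B^-1 A)
    t2 = np.trace(np.linalg.solve(cov_a, cov_b))    # tr(A^-1 B)
    return 0.5 * np.real(t1 + t2) - dim

def region_covariance(pixel_covs):
    """Summarize a region by the sample mean of its per-pixel
    covariance matrices (stacked as an (N, 3, 3) array)."""
    return pixel_covs.mean(axis=0)
```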

Nalezení známého objektu v sérii digitálních snímků / Finding of known object in a series of digital images

Bednařík, Jan January 2011 (has links)
The aim of this thesis is the detection of a known object in a series of digital images. Two detection methods are presented. The first method is based on edge and color detection and comparison. Edge detection uses both the gradient and the Laplacian, i.e. the first- and second-order derivatives; Sobel operators are used, as well as the Laplacian of Gaussian method, together with thresholding and automatic threshold calculation. Two variants of color detection are considered: direct color comparison and detection based on a search for colors of interest. The second method is based on interest point detection, using a modified SURF method to find a known object in a series of images.
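A minimal sketch of the interest-point approach, with ORB standing in for the modified SURF detector (SURF itself ships only in OpenCV's non-free contrib build); the match-distance threshold is an assumption.

```python
import cv2

def contains_object(template_gray, scene_gray, min_matches=10):
    """Decide whether the known object appears in the scene by counting
    cross-checked ORB descriptor matches below a distance threshold."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return False                                  # no features found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_t, des_s)
    good = [m for m in matches if m.distance < 50]    # assumed threshold
    return len(good) >= min_matches
```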
