31

Level Set Segmentation and Volume Visualization of Vascular Trees

Läthén, Gunnar January 2013 (has links)
Medical imaging is an important part of the clinical workflow. With the increasing amount and complexity of image data comes the need for automatic (or semi-automatic) analysis methods which aid the physician in the exploration of the data. One specific imaging technique is angiography, in which the blood vessels are imaged using an injected contrast agent which increases the contrast between blood and surrounding tissue. In these images, the blood vessels can be viewed as tubular structures with varying diameters. Deviations from this structure are signs of disease, such as stenoses, which reduce blood flow, or aneurysms, which carry a risk of rupture. This thesis focuses on segmentation and visualization of blood vessels, constituting the vascular tree, in angiography images. Segmentation is the problem of partitioning an image into separate regions. There is no general segmentation method which achieves good results for all possible applications. Instead, algorithms use prior knowledge and data models adapted to the problem at hand for good performance. We study blood vessel segmentation based on a two-step approach. First, we model the vessels as a collection of linear structures which are detected using multi-scale filtering techniques. Second, we develop machine-learning-based level set segmentation methods to separate the vessels from the background, based on the output of the filtering. In many applications the three-dimensional structure of the vascular tree has to be presented to a radiologist or a member of the medical staff. For this, a visualization technique such as direct volume rendering is often used. In the case of computed tomography angiography, one has to take into account that the image depends on both the geometrical structure of the vascular tree and the varying concentration of the injected contrast agent. The visualization should have an easy-to-understand interpretation for the user, to make diagnostic interpretation reliable. The mapping from the image data to the visualization should therefore closely follow routines that are commonly used by the radiologist. We developed an automatic method which adapts the visualization locally to the contrast agent, revealing a larger portion of the vascular tree while minimizing the manual intervention required from the radiologist. The effectiveness of this method is evaluated in a user study involving radiologists as domain experts.
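The two-step pipeline the abstract describes can be sketched with generic building blocks. The following minimal example uses scikit-image's Frangi vesselness filter for the multi-scale line detection step and a morphological geodesic active contour as the level set step; these are standard stand-ins, not the thesis's machine-learning-based formulation, and the file name and thresholds are assumptions.

```python
# A minimal sketch of a two-step vessel segmentation pipeline: multi-scale
# line filtering followed by a level set method. Generic scikit-image
# stand-ins, not the author's machine-learning-based formulation.
from skimage import io, img_as_float
from skimage.filters import frangi
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

image = img_as_float(io.imread("angiogram.png", as_gray=True))  # hypothetical input

# Step 1: multi-scale filtering enhances tubular (vessel-like) structures.
vesselness = frangi(image, sigmas=range(1, 6), black_ridges=False)

# Step 2: a level set evolves towards vessel boundaries; the filter output
# seeds the initial contour and the image gradient stops the evolution.
gimage = inverse_gaussian_gradient(image)
init = vesselness > 0.5 * vesselness.max()   # crude initialization from step 1
segmentation = morphological_geodesic_active_contour(
    gimage, num_iter=100, init_level_set=init, smoothing=2, balloon=1)
```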
32

Interactive Initialization of Real-Time 3D Tracking for Augmented Reality on Smart Devices with Depth Sensors / Interaktive Initialisierung eines Echtzeit 3D-Trackings für Augmented Reality auf Smart Devices mit Tiefensensoren

Neges, Matthias, Siewert, Jan Luca 10 December 2016 (has links) (PDF)
Today's approaches to 3D tracking for registration in the real world for augmented reality can be divided into model-based and environment-based methods. Environment-based methods use the SLAM algorithm to generate three-dimensional point clouds of the environment in real time. Model-based methods originate in the Canny edge detector and use edge models derived from CAD models. Combining the model-based approach via edge detection with the environment-based approach via 3D point clouds yields robust, hybrid 3D tracking. The corresponding algorithms of the various methods are already implemented in the AR frameworks available today. This contribution demonstrates the efficiency of hybrid 3D tracking, but also the problem of the required geometric similarity between the ideal CAD model (or edge model) and the real object. With different assembly stages at different assembly stations and with changing users, for example, re-initialization is required. Hybrid 3D tracking therefore requires numerous edge models, which must first be derived from each assembly stage. In addition, manufacturing introduces geometric deviations which, depending on the size of the industry-specific tolerances, do not match the edge models derived from the ideal CAD models closely enough. The authors therefore propose the use of parametrically constructed master models, which are geometrically instantiated through an interactive initialization. A mobile depth sensor for smart devices is used here, which, with the user's help, relates the real geometric features to the ideal ones of the CAD model. Furthermore, the presented concept proposes the use of special search algorithms based on geometric similarities, so that registration and instantiation are possible even without a stored master model. The validation in this contribution focuses on the interactive initialization using a concrete, application-oriented example, since the initialization forms the basis for the further development of the overall concept.
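The model-based half of such a hybrid tracker rests on matching a CAD-derived edge model against edges detected in the camera frame. The sketch below illustrates only that building block with OpenCV, using Canny edges and a chamfer-style distance score; the file names are placeholders, and the SLAM point-cloud side, the AR frameworks, and the depth-sensor initialization the paper proposes are not reproduced.

```python
# A minimal sketch of the model-based half of hybrid tracking: extract an
# edge map from the camera frame and score it against a CAD-derived edge
# model. Chamfer matching is a generic stand-in, not the paper's method.
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input
model_edges = cv2.imread("cad_edge_model.png", cv2.IMREAD_GRAYSCALE)  # hypothetical render from CAD

# Edge detection on the live frame (the Canny detector the abstract cites).
frame_edges = cv2.Canny(frame, 50, 150)

# Distance transform of the inverted edge map: each pixel holds the
# distance to the nearest detected edge.
dist = cv2.distanceTransform(255 - frame_edges, cv2.DIST_L2, 3)

# Chamfer score for one candidate pose: mean distance from the projected
# model edge pixels to the nearest image edge (lower means better alignment).
ys, xs = np.nonzero(model_edges)
score = dist[ys, xs].mean()
print(f"chamfer score: {score:.2f}")
```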
33

Three-Dimensional Digital Image Processing And Reconstruction Of Granular Particles

Rivas, Jorge A 26 October 2005 (has links)
This thesis presents a method for digitization of the two-dimensional shape of granular particles by means of photomicroscopy and image processing techniques, implemented using a software package from Media Cybernetics, Inc.: Image-Pro Plus 5.1 and the add-ins Scope-Pro 5.0, SharpStack 5.0 and 3D Constructor 5.0. With the use of these tools, it was possible to implement an efficient semi-automated routine that allows the digitization of large numbers of two-dimensional silhouettes of particles in minimal time, without compromising the quality and reliability of the shapes obtained. Different sample preparation techniques, illumination systems, deconvolution algorithms, mathematical functions, filtering techniques and programming commands are brought into play in order to transform the shape of the two-dimensional projection of particles (captured as a set of successive images acquired at different planes of focus) into a binary format (black and white). At the same time, measurements and statistical information such as grain size distribution can be analyzed from the shapes obtained for a particular granular soil. This information includes, but is not limited to, perimeter, area, diameter (minimum, maximum and mean), caliper (longest, smallest and mean), roundness, aspect ratio and fractal dimension. Results are presented for several sands collected from different places around the world. In addition, some alternatives for three-dimensional shape reconstruction, such as X-ray nanotomography and serial sectioning, are discussed.
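The measurements listed above can be reproduced on any binary silhouette image with open-source tools. The sketch below uses scikit-image's regionprops as a stand-in for the commercial Image-Pro Plus routines; the file name, threshold choice, and noise cutoff are assumptions.

```python
# A minimal sketch of measuring binary particle silhouettes: threshold,
# label connected particles, and report shape statistics per particle.
from skimage import io, measure
from skimage.filters import threshold_otsu

gray = io.imread("sand_particles.png", as_gray=True)   # hypothetical input
binary = gray < threshold_otsu(gray)                   # particles darker than background

labels = measure.label(binary)
for region in measure.regionprops(labels):
    if region.area < 50:                               # skip specks of noise
        continue
    aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
    print(f"particle {region.label}: area={region.area}, "
          f"perimeter={region.perimeter:.1f}, "
          f"equiv. diameter={region.equivalent_diameter:.1f}, "
          f"aspect ratio={aspect:.2f}")
```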
34

Computer Aided Long-Bone Segmentation and Fracture Detection

Donnelley, Martin, martin.donnelley@gmail.com January 2008 (has links)
Medical imaging has advanced at a tremendous rate since x-rays were discovered in 1895. Today, x-ray machines produce extremely high-quality images for radiologists to interpret. However, the methods of interpretation have only recently begun to be augmented by advances in computer technology. Computer aided diagnosis (CAD) systems that guide healthcare professionals toward the correct diagnosis are slowly becoming more prevalent throughout the medical field. Bone fractures are a relatively common occurrence. In most developed countries the number of fractures associated with age-related bone loss is increasing rapidly. Regardless of the treating physician's level of experience, accurate detection and evaluation of musculoskeletal trauma is often problematic. Each year, the presence of many fractures is missed during x-ray diagnosis. For a trauma patient, a misdiagnosis can lead to ineffective patient management, increased dissatisfaction, and expensive litigation. As a result, detection of long-bone fractures is an important orthopaedic and radiologic problem, and it is proposed that a novel CAD system could help lower the miss rate. This thesis examines the development of such a system for the detection of long-bone fractures. A number of image processing software algorithms useful for automating the fracture detection process have been created. The first algorithm is a non-linear scale-space smoothing technique that allows edge information to be extracted from the x-ray image. The degree of smoothing is controlled by the scale parameter, and allows the amount of image detail that should be retained to be adjusted for each stage of the analysis. The result is demonstrated to be superior to the Canny edge detection algorithm. The second utilises the edge information to determine a set of parameters that approximate the shaft of the long-bone. This is achieved using a modified Hough Transform, and specially designed peak and line endpoint detectors. The third stage uses the shaft approximation data to locate the bone centre-lines and then perform diaphysis segmentation to separate the diaphysis from the epiphyses. Two segmentation algorithms are presented, and one is shown to not only produce better results, but also be suitable for application to all long-bone images. The final stage applies a gradient based fracture detection algorithm to the segmented regions. This algorithm utilises a tool called the gradient composite measure to identify abnormal regions, including fractures, within the image. These regions are then highlighted if they are deemed to be part of a fracture. A database of fracture images from trauma patients was collected from the emergency department at the Flinders Medical Centre. From this complete set of images, a development set and test set were created. Experiments on the test set show that diaphysis segmentation and fracture detection are both performed with an accuracy of 83%. Therefore these tools can consistently identify the boundaries between the bone segments, and then accurately highlight midshaft long-bone fractures within the marked diaphysis. Two of the algorithms, the non-linear smoothing and the Hough Transform, are relatively slow to compute. Methods of decreasing the diagnosis time were investigated, and a set of parallelised algorithms was designed. These algorithms significantly reduced the total calculation time, making their use much more feasible.
The thesis concludes with an outline of future research and proposed techniques that, along with the methods and results presented, will improve CAD systems for fracture detection, resulting in more accurate diagnosis of fractures and a reduction of the fracture miss rate.
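The shaft-approximation stage lends itself to a compact illustration. The sketch below detects the dominant straight lines in an edge map with the standard scikit-image Hough transform; the thesis's modified transform and its special peak and endpoint detectors are not reproduced, and the input file and Canny parameters are assumptions.

```python
# A minimal sketch of shaft approximation: detect the dominant straight
# lines in an edge map with a standard Hough transform.
import numpy as np
from skimage import io, feature
from skimage.transform import hough_line, hough_line_peaks

xray = io.imread("femur_xray.png", as_gray=True)   # hypothetical input
edges = feature.canny(xray, sigma=3)               # stand-in for the scale-space smoothing

# Accumulate votes over (angle, distance) line parameterizations.
angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
h, theta, d = hough_line(edges, theta=angles)

# The two strongest peaks roughly bound the near-parallel shaft edges.
for _, angle, dist in zip(*hough_line_peaks(h, theta, d, num_peaks=2)):
    print(f"line: angle={np.degrees(angle):.1f} deg, distance={dist:.1f} px")
```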
35

Automatic landmark detection on Trochanter Minor in x-ray images / Automatisk landmärkesdetektering på Trochanter Minor i röntgenbilder

Holm, Per January 2005 (has links)
During pre-operative planning for hip replacement, the choice of prosthesis can be aided by measurements in x-ray images of the hip. Some measurements can be done automatically, but this requires robust and precise image processing algorithms which can detect anatomical features. The trochanter minor is an important landmark on the femoral shaft. In this thesis, three different image processing algorithms are explained and tested for automatic landmark detection on the trochanter minor. The algorithms covered are Active Shape Models, a shortest path algorithm, and a segmentation technique based on cumulated cost maps. The results indicate that cumulated cost maps are an effective tool for rough segmentation of the trochanter minor. A snake algorithm was then applied which could find the edge of the trochanter minor in all images used in the test. The edge can be used to locate a curvature extremum which can serve as a landmark point.
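The cost-map idea can be illustrated with a minimum-cost path search. The sketch below builds a cost map from the inverted gradient magnitude and traces a cheapest path between two assumed contour points with scikit-image; it is a generic illustration of the shortest-path/cost-map family, not the thesis's exact algorithm.

```python
# A minimal sketch in the spirit of shortest-path / cumulated-cost-map
# segmentation: strong edges are cheap to travel along, so a minimum-cost
# path between two contour points follows the bone boundary.
from skimage import io, filters
from skimage.graph import route_through_array

xray = io.imread("hip_xray.png", as_gray=True)     # hypothetical input

# Invert the gradient so that pixels on strong edges have low cost.
gradient = filters.sobel(xray)
cost = 1.0 / (gradient + 1e-6)

start, end = (120, 80), (180, 140)                 # assumed contour endpoints
path, total_cost = route_through_array(cost, start, end, fully_connected=True)
print(f"traced {len(path)} pixels along the contour, cost={total_cost:.1f}")
```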
36

Direction estimation on 3D-tomography images of jawbones

Mazeyev, Yuri January 2008 (has links)
The present work presents a technique for estimating the optimal direction for placing a dental implant. A volumetric computed tomography (CT) scan is used to support the search. The work offers criteria for the optimal implant placement direction and methods for evaluating a direction's significance. The technique utilizes the structure tensor to find a normal to the jawbone surface; the direction of that normal is then used as the initial direction in the search for the optimal one.

The technique described in the present work is aimed at supporting the doctor's decisions during dental implantation treatment.
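A two-dimensional sketch of the structure tensor step: the eigenvector belonging to the largest eigenvalue points across the strongest local intensity variation, which on a bone boundary approximates the surface normal. The thesis operates on 3D CT volumes; this 2D scikit-image version, with an assumed input slice and probe point, only illustrates the idea.

```python
# A minimal 2D sketch of structure-tensor-based direction estimation: the
# dominant eigenvector of the tensor approximates the local surface normal.
import numpy as np
from skimage import io
from skimage.feature import structure_tensor

ct_slice = io.imread("jaw_ct_slice.png", as_gray=True)   # hypothetical input

Arr, Arc, Acc = structure_tensor(ct_slice, sigma=2.0, order="rc")

row, col = 128, 96                                       # assumed point on the bone surface
T = np.array([[Arr[row, col], Arc[row, col]],
              [Arc[row, col], Acc[row, col]]])
eigvals, eigvecs = np.linalg.eigh(T)                     # eigenvalues in ascending order
normal = eigvecs[:, -1]                                  # direction of strongest variation
print(f"estimated surface normal at ({row}, {col}): {normal}")
```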
37

Finding Junctions Using the Image Gradient

Beymer, David J. 01 December 1991 (has links)
Junctions are the intersection points of three or more intensity surfaces in an image. An analysis of zero crossings and the gradient near junctions demonstrates that gradient-based edge detection schemes fragment edges at junctions. This fragmentation is caused by the intrinsic pairing of zero crossings and a destructive interference of edge gradients at junctions. Using the previous gradient analysis, we propose a junction detector that finds junctions in edge maps by following gradient ridges and using the minimum direction of saddle points in the gradient. The junction detector is demonstrated on real imagery and previous approaches to junction detection are discussed.
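The gradient analysis can be made concrete with a small numerical experiment. The sketch below computes the gradient-magnitude surface and flags its saddle points via the sign of the Hessian determinant; the ridge-following and minimum-direction steps of the actual detector are not reproduced, and the input file and threshold are assumptions.

```python
# A minimal sketch of the gradient analysis behind the detector: saddle
# points of the gradient-magnitude surface (negative Hessian determinant)
# mark where interfering edge gradients meet near junctions.
import numpy as np
from scipy import ndimage
from skimage import io

image = io.imread("scene.png", as_gray=True)        # hypothetical input

# Gradient magnitude of a smoothed image.
smooth = ndimage.gaussian_filter(image, sigma=2.0)
gy, gx = np.gradient(smooth)
mag = np.hypot(gx, gy)

# Hessian of the magnitude surface; saddles have negative determinant.
m_y, m_x = np.gradient(mag)
m_yy, m_yx = np.gradient(m_y)
m_xy, m_xx = np.gradient(m_x)
det_hessian = m_xx * m_yy - m_xy * m_yx

saddles = det_hessian < -1e-4                       # assumed threshold
print(f"{int(saddles.sum())} candidate saddle points")
```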
39

Edge Detection based on Grayscale Morphology on Hexagonal Images

Tsai, Wei-cheng 29 August 2012 (has links)
This study focuses on hexagonally sampled images and grayscale morphology. We combine hexagonal image processing and grayscale morphology to develop hexagonal grayscale morphology, and propose an algorithm to detect and enhance edges. Hexagonal image processing consists of three important steps: conversion of hexagonally sampled images, processing, and display of the processed images on a simulated hexagonal grid. We construct four different sizes of hexagonal structuring elements to apply morphological operations on hexagonal images. In this study, we applied the morphological gradient for edge detection and proposed an algorithm for edge enhancement. Moreover, we developed six different shapes of structuring elements to find an optimum one. Finally, we assessed two methods to compare our results, and identified the best result and the optimum structuring element. We expect that the proposed algorithm will offer a useful tool for image processing on hexagonally sampled images.
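The core operation, the grayscale morphological gradient, is dilation minus erosion. The sketch below applies it on an ordinary square grid with a disk-shaped structuring element standing in for the hexagonal elements the study constructs; hexagonal sampling itself is not reproduced, and the input file is an assumption.

```python
# A minimal sketch of grayscale morphological gradient edge detection:
# dilation minus erosion with a small structuring element.
from scipy import ndimage
from skimage import io
from skimage.morphology import disk

image = io.imread("input.png", as_gray=True)   # hypothetical input
selem = disk(1)                                # stand-in for a hexagonal element

dilated = ndimage.grey_dilation(image, footprint=selem)
eroded = ndimage.grey_erosion(image, footprint=selem)
edges = dilated - eroded                       # morphological gradient
```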
40

Applying Point-Based Principal Component Analysis on Orca Whistle Detection

Wang, Chiao-mei 23 July 2007 (has links)
For many undersea research scenarios, instruments need to be deployed for more than one month, which is the basic time interval for many phenomena. With limited power supply and memory, management strategies are crucial for the success of data collection. For acoustic recording of undersea activities, in general, either a preprogrammed duty cycle is configured to log partial time series, or the spectrogram of the signal is derived and stored, to utilize the available memory efficiently. To overcome this limitation, we propose an algorithm to classify different sounds and store only the data of interest. Features like characteristic frequencies, large amplitudes at selected frequencies, or intensity thresholds are used to identify or classify different patterns. One main limitation of this type of approach is that the algorithm is generally range-dependent and, as a result, also sound-level-dependent. Such algorithms are less robust to changes in the environment. On the other hand, one interesting observation is that when human beings look at a spectrogram, they can immediately tell the difference between two patterns. Even with no knowledge about the nature of the source, humans can still discern tiny dissimilarities and group the patterns accordingly. This suggests that recognition and classification can be done on the spectrogram as a pattern recognition problem. In this work, we propose to modify Principal Component Analysis by generating feature points from moment invariants and sound level variance, to classify sounds of interest in the ocean. Among the different sound sources in the ocean, we focus on three categories of interest: rain, ships, and whales and dolphins. The sound data were recorded with the Passive Acoustic Listener developed by Nystuen, Applied Physics Lab, University of Washington. From all the data, we manually identified twenty frames for each case and used them as the base training set. Feeding several unknown clips into classification experiments, we find that point-based feature extraction is an effective way to describe whistle vocalizations, and we believe that this algorithm would be useful for extracting features from noisy recordings of the calls of a wide variety of species.
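The classification idea can be sketched with standard components: moment-invariant features per spectrogram frame, PCA for dimensionality reduction, and a nearest-neighbor classifier. The example below uses Hu moments from scikit-image and synthetic stand-in data; the paper's point-based feature generation and sound-level variance are not reproduced.

```python
# A minimal sketch of moment-invariant + PCA classification of spectrogram
# frames. The random arrays stand in for labelled spectrogram patches.
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def hu_features(frame):
    """Seven Hu moment invariants of one 2D spectrogram frame."""
    mu = moments_central(frame)
    nu = moments_normalized(mu)
    return moments_hu(nu)

# Stand-in data: in practice these would be labelled spectrogram patches.
rng = np.random.default_rng(0)
train_frames = [rng.random((64, 64)) for _ in range(60)]
train_labels = ["rain"] * 20 + ["ship"] * 20 + ["whale"] * 20

X = np.array([hu_features(f) for f in train_frames])
pca = PCA(n_components=3).fit(X)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), train_labels)

unknown = rng.random((64, 64))                 # stand-in for an unknown clip's frame
print(clf.predict(pca.transform([hu_features(unknown)])))
```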
