151

First Response to Emergency Situation in a Smart Environment using a Mobile Robot

Lazzaro, Gloria January 2015 (has links)
In recent years, the growing number of elderly people has become one of the major social challenges for most developed countries. More than one third of elderly people fall at least once a year, and they are often unable to get up again unaided, especially if they live alone. Smart homes can provide efficient and cost-effective solutions, using sensing technologies to detect when a potentially dangerous situation occurs. Robotic assistance is one of the most promising technologies for recognizing a fallen person and helping him/her in case of danger. This dissertation presents two methods: the first detects a human being on the ground, and the second recognizes whether what was detected really is a human. The first method is based on Kinect depth images, thresholding, and blob analysis. The second is a GLCM (gray-level co-occurrence matrix) feature-based method, evaluated with two different classifiers, namely a Support Vector Machine (SVM) and an Artificial Neural Network (ANN), to distinguish human from non-human. Results show that the SVM and ANN classify the presence of a person with 76.5% and 85.6% accuracy, respectively, indicating that these methods can potentially be used to recognize the presence or absence of a fallen human lying on the floor.
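The GLCM-plus-classifier pipeline described above follows a standard texture-classification recipe. A minimal sketch of that recipe, assuming scikit-image and scikit-learn; the synthetic training patches below are illustrative stand-ins, not the thesis's data:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    """Texture features from a gray-level co-occurrence matrix (GLCM).
    `patch` is a 2D uint8 region, e.g. cut around a candidate blob
    found in the Kinect depth image."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-in training data: smooth "human" patches vs noisy
# "non-human" patches (real training data would come from labelled frames).
rng = np.random.default_rng(0)
human = [rng.integers(90, 110, (64, 64)).astype(np.uint8) for _ in range(20)]
other = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(20)]
X = np.array([glcm_features(p) for p in human + other])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf").fit(X, y)
# Classify a new candidate region detected by thresholding + blob analysis.
print(clf.predict([glcm_features(human[0])]))
```

In a full system the thresholding and blob-analysis stage would isolate candidate regions first, and only those regions would be passed to the classifier.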
152

Digital X-ray analysis for monitoring fracture healing

Dawson, Sarah P. January 2009 (has links)
X-ray based evaluation of different stages of fracture healing is a well-established clinical standard. However, several studies have shown plain radiography alone to be an unreliable method to assess healing. The advent of digital X-ray systems provides the potential to perform quantitative analysis on X-ray images without disrupting normal clinical practice. Two aspects were explored in this study. The first was the measurement of mechanical fracture stiffness under four point bending and axial loading. The second was the inclusion of an Aluminium step wedge to provide Aluminium-equivalent thickness calibration information. Mechanical stiffness studies involved the development of equipment to perform four point bending on intra-medullary (IM) nailed tibial fractures, equipment to perform axial loading on conservatively treated humeral fractures, and fracture models to examine the developed systems. Computational procedures to automatically measure the angle and offset occurring at the fracture site by comparing loaded and unloaded X-ray images were developed utilising cross-correlation. The apparatus and procedures were tested using the fracture models both in X-ray and using the Zwick materials testing machine. The four point bending system was applied clinically to a series of IM nailed tibial fracture patients and the axial loading system to two conservatively treated humeral fracture patients. Mechanical stiffness results showed that the apparatus worked well in the clinical radiography environment and was unobtrusive to normal practice. The developed X-ray analysis procedure provided reliable measurements. However, in the case of IM nailed tibial fractures, both angular and displacement movements were too small to be accurately assessed or to provide reliable stiffness measurements. This indicated that this patient group was possibly unsuitable for mechanical stiffness measurements or that higher loads needed to be applied to the fracture site. The case studies of conservatively treated humeral fractures showed potential in detecting movement between loaded and unloaded X-rays and using this to provide stiffness information. Further investigation is required to show that this technique has the potential to aid fracture healing monitoring. Investigation into Aluminium step wedge calibration began with the design of different step wedges and X-ray phantoms. Initial image analysis involved studying the automatic processing applied by a digital Computed Radiography (CR) Fuji system and modelling of the inhomogeneities in X-ray images, as well as investigation into the effect of and correction for scatter, overlying soft tissue and bone thickness. Computational procedures were developed to semi-automatically detect the steps of the step wedge, form an exponential Aluminium step thickness to grey level calibration graph, measure soft tissue and bone thickness, and correct for the heel effect and scatter contributions. Tests were carried out on pre-clinical models and results compared to ash weight and peripheral quantitative computed tomography (pQCT). A clinical study of radial fractures was used to investigate the effectiveness of the step wedge calibration system in monitoring fracture healing changes. Results using the step wedge indicated that the calibration technique was effective in detecting and correcting for aspects influencing Aluminium-equivalent thickness measures.
With careful processing, useful information was obtained from digital X-rays that included the Aluminium step wedge, and these correlated well with existing density measures. The use of the wedge in patient images showed that small increases in Aluminium-equivalent thickness of the fracture site could be detected. This was most useful for intra-patient comparisons throughout the course of healing rather than providing quantitative measurements which were comparable to other density measures. In conclusion, this thesis shows the potential for accurate analysis of digital X-rays to aid the monitoring of healing changes in fracture patients, particularly with application of axial loading and the use of step wedge calibration.
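The exponential thickness-to-grey-level calibration described above can be sketched as a simple curve fit; the exact model form, its inversion, and all numbers below are illustrative assumptions, not the thesis's measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements from one radiograph: known Aluminium step
# thicknesses (mm) and the mean grey level sampled over each detected step.
step_thicknesses = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
step_grey_levels = np.array([710., 612., 534., 471., 420., 379.])

def calibration(t, a, b, c):
    """Assumed exponential grey-level response to Aluminium thickness t."""
    return a * np.exp(-b * t) + c

params, _ = curve_fit(calibration, step_thicknesses, step_grey_levels,
                      p0=(500.0, 0.1, 300.0))

def grey_to_thickness(g, a, b, c):
    """Invert the calibration: Aluminium-equivalent thickness for grey g."""
    return -np.log((g - c) / a) / b

# Aluminium-equivalent thickness at a pixel over the fracture site.
thickness_eq = grey_to_thickness(455.0, *params)
```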
153

ATRICS - A New System for Image Acquisition in Dendrochronology

Levanič, Tom 12 1900 (has links)
We developed a new system for image acquisition in dendrochronology called ATRICS. The new system was compared with existing measurement methods. Images acquired with the ATRICS program and processed in any of the available programs for automatic tree-ring recognition show much higher detail than those from flatbed scanners, as optical magnification has many advantages over digital magnification (especially in areas with extremely narrow tree rings). The quality of stitching was tested by visual assessment: no blurred areas were detected between adjacent images and no tree rings were missing because of the stitching procedure. A test for distortion showed no differences between the original and captured square, indicating that the captured images are distortion-free. Differences between manual and automatic measurements are statistically insignificant, and the processing of very long cores poses no problems.
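Stitching adjacent high-magnification captures, as ATRICS does, reduces at its core to estimating the translation between overlapping tiles. A minimal sketch using phase cross-correlation (scikit-image assumed; the overlap geometry and synthetic tiles are illustrative assumptions, not the ATRICS implementation):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def tile_offset(tile_a, tile_b, overlap=200):
    """Estimate where tile_b sits in tile_a's coordinate frame, assuming
    the right edge of tile_a overlaps the left edge of tile_b by roughly
    `overlap` pixels."""
    strip_a = tile_a[:, -overlap:]
    strip_b = tile_b[:, :overlap]
    # Sub-pixel shift that registers strip_b onto strip_a (row, col).
    shift, error, _ = phase_cross_correlation(strip_a, strip_b,
                                              upsample_factor=10)
    dy, dx = shift
    return dy, tile_a.shape[1] - overlap + dx

# Synthetic test: cut two overlapping tiles from one "core" image.
rng = np.random.default_rng(0)
core = rng.random((400, 1000))
tile_a, tile_b = core[:, :600], core[:, 400:]
dy, dx = tile_offset(tile_a, tile_b)   # expect dy ~ 0, dx ~ 400
```

Chaining such pairwise offsets gives each tile a global mosaic position, after which tiles can be pasted or blended; the visual seam assessment described in the abstract applies to exactly these junctions.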
154

Interactive 3D Image Analysis for Cranio-Maxillofacial Surgery Planning and Orthopedic Applications

Nysjö, Johan January 2016 (has links)
Modern medical imaging devices are able to generate highly detailed three-dimensional (3D) images of the skeleton. Computerized image processing and analysis methods, combined with real-time volume visualization techniques, can greatly facilitate the interpretation of such images and are increasingly used in surgical planning to aid reconstruction of the skeleton after trauma or disease. Two key challenges are to accurately separate (segment) bone structures or cavities of interest from the rest of the image and to interact with the 3D data in an efficient way. This thesis presents efficient and precise interactive methods for segmenting, visualizing, and analysing 3D computed tomography (CT) images of the skeleton. The methods are validated on real CT datasets and are primarily intended to support planning and evaluation of cranio-maxillofacial (CMF) and orthopedic surgery. Two interactive methods for segmenting the orbit (eye-socket) are introduced. The first method implements a deformable model that is guided and fitted to the orbit via haptic 3D interaction, whereas the second method implements a user-steered volumetric brush that uses distance and gradient information to find exact object boundaries. The thesis also presents a semi-automatic method for measuring 3D angulation changes in wrist fractures. The fractured bone is extracted with interactive mesh segmentation, and the angulation is determined with a technique based on surface registration and RANSAC. Lastly, the thesis presents an interactive and intuitive tool for segmenting individual bones and bone fragments. This type of segmentation is essential for virtual surgery planning, but takes several hours to perform with conventional manual methods. The presented tool combines GPU-accelerated random walks segmentation with direct volume rendering and interactive 3D texture painting to enable quick marking and separation of bone structures. It enables the user to produce an accurate segmentation within a few minutes, thereby removing a major bottleneck in the planning procedure.
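The bone-separation tool described above builds on random walks segmentation, whose core labelling step is available in scikit-image. A minimal CPU sketch (the thesis's GPU acceleration, volume rendering, and 3D painting interface are out of scope; the `volume` and `seeds` below are synthetic stand-ins):

```python
import numpy as np
from skimage.segmentation import random_walker

def separate_fragments(volume, seeds, beta=130):
    """Label every voxel with the seed label it is most strongly connected
    to, following the random-walker formulation. `seeds` has the same
    shape as `volume`: 0 = unlabelled, 1, 2, ... = voxels the user marked
    on each bone fragment (painted interactively in the thesis's tool)."""
    return random_walker(volume, seeds, beta=beta, mode='bf')

# Synthetic example: two seed labels scribbled on two "fragments".
rng = np.random.default_rng(0)
volume = rng.normal(size=(40, 40, 40))
volume[:, :20, :] += 3.0          # "fragment 1" is brighter
seeds = np.zeros_like(volume, dtype=np.uint8)
seeds[20, 5, 20] = 1              # one click on fragment 1
seeds[20, 35, 20] = 2             # one click on fragment 2
labels = separate_fragments(volume, seeds)
```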
155

Bilder i historieundervisningen : Bild som kunskapsförmedlar / Images in history teaching : The image as a conveyor of knowledge

Özgurdamar, Deniz January 2017 (has links)
This thesis is based on interviews I have conducted with both teachers and students, which I analyze and evaluate from a history-education perspective. I present research on the topic and on the different ways images can be used in teaching. Pupils' learning processes vary, and in a society where images are a large part of their lives it is, in my view, a pity not to use more pictures in school, especially in the teaching of history. It is not enough for teachers simply to use more images in their teaching; they must use the images in the right way, so that students get the chance to learn how to analyze and interpret an image's meaning and history rather than just describe what they see.
156

Applications of focal-series data in scanning-transmission electron microscopy

Jones, Lewys January 2013 (has links)
Since its development, the scanning transmission electron microscope has rapidly found uses right across the materials sciences. Its use of a finely focussed electron probe rastered across samples offers the microscopist a variety of imaging and spectroscopy signals in parallel. These signals are individually intuitive to interpret, and collectively immensely powerful as a research tool. Unsurprisingly then, much attention is concentrated on the optical quality of the electron probes used. The introduction of multi-pole hardware to correct optical distortions has yielded a step-change in imaging performance; now with spherical and other remnant aberrations greatly reduced, larger probe-forming apertures are suddenly available. Probes formed by such apertures exhibit a much improved and routinely sub-Angstrom diffraction-limited resolution, as well as a greatly increased probe current for spectroscopic work. The superb fineness of the electron beams and enormous magnifications now achievable make the STEM one of the most sensitive scientific instruments ever developed, and this thesis deals with two core issues that become important in this new aberration-corrected era. With this new-found sensitivity comes the risk of imaging distortion from outside influences such as acoustic or mechanical vibrations. These can corrupt the data in an unsatisfactory manner and counter the natural interpretability of the technique. Methods to identify and diagnose this distortion are discussed, and a new technique developed to restore the corrupted data is presented. Secondly, the subtleties of probe shape in the multi-pole-corrected STEM are extensively evaluated via simulation, with the contrast-transfer capabilities across defocus explored in detail. From this investigation a new technique of STEM focal-series reconstruction (FSR) is developed to compensate for the small remnant aberrations that still persist, recovering the sample object function free from any optical distortion. In both cases the methodologies were developed into automated computer codes, and example restorations from the two techniques are shown (separately, although in principle the scan-corrected output is compatible with FSR). The performance of these results has been quantified with respect to several factors, including image resolution, signal-to-noise ratio, sample drift, low-frequency instability, and quantitative image intensity. The techniques developed are offered as practical tools for the microscopist wishing to push the performance of their instrument just that little bit further.
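A focal series samples how the probe, and hence contrast transfer, varies with defocus. As a minimal illustration of the underlying optics, the sketch below forms a diffraction-limited STEM probe from a top-hat aperture using the standard defocus phase χ(k) = πλΔf k²; all parameter values are illustrative assumptions, and higher-order aberrations are omitted:

```python
import numpy as np

# Illustrative parameters: 300 kV electrons, 20 mrad probe-forming aperture.
wavelength = 1.97e-12        # m, approx. relativistic wavelength at 300 kV
alpha_max = 20e-3            # rad, aperture semi-angle
n, pixel = 256, 0.2e-10      # grid size and real-space sampling (m)

k = np.fft.fftfreq(n, d=pixel)                       # spatial frequency (1/m)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
aperture = (np.sqrt(k2) * wavelength) <= alpha_max   # top-hat aperture

def probe_intensity(defocus):
    """Real-space probe intensity for a given defocus (m), keeping only
    the defocus term of the aberration function chi(k)."""
    chi = np.pi * wavelength * defocus * k2
    psi = np.fft.ifft2(aperture * np.exp(-1j * chi))
    p = np.abs(psi)**2
    return p / p.sum()

# A small focal series, e.g. -4 nm to +4 nm in 2 nm steps.
series = [probe_intensity(df) for df in np.arange(-4e-9, 5e-9, 2e-9)]
```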
157

Entropia aplicada ao reconhecimento de padrões em imagens / Entropy applied to pattern recognition in images

Assirati, Lucas 23 July 2014 (has links)
Este trabalho faz um estudo do uso da entropia como ferramenta para o reconhecimento de padrões em imagens. A entropia é um conceito utilizado em termodinâmica para medir o grau de organização de um meio. Entretanto, este conceito pode ser ampliado para outras áreas do conhecimento. A adoção do conceito em Teoria da Informação e, por consequência, em reconhecimento de padrões foi introduzida por Shannon no trabalho intitulado "A Mathematical Theory of Communication", publicado no ano de 1948. Neste mestrado, além da entropia clássica de Boltzmann-Gibbs-Shannon, são investigadas a entropia generalizada de Tsallis e suas variantes (análise multi-escala, múltiplo índice q e seleção de atributos), aplicadas ao reconhecimento de padrões em imagens. Utilizando bases de dados bem conhecidas na literatura, realizou-se estudos comparativos entre as técnicas. Os resultados mostram que a entropia de Tsallis, através de análise multi-escala e múltiplo índice q, tem grande vantagem sobre a entropia de Boltzmann-Gibbs-Shannon. Aplicações práticas deste estudo são propostas com o intuito de demonstrar o potencial do método. / This work studies the use of entropy as a tool for pattern recognition in images. Entropy is a concept used in thermodynamics to measure the degree of organization of a system. However, this concept can be extended to other areas of knowledge. The adoption of the concept in information theory and, consequently, in pattern recognition was introduced by Shannon in the paper entitled "A Mathematical Theory of Communication", published in 1948. In this master's thesis, the classical Boltzmann-Gibbs-Shannon entropy, the generalized Tsallis entropy, and its variants (multi-scale analysis, multiple q index, and feature selection) are studied and applied to pattern recognition in images. Using databases well known in the literature, we performed comparative studies between the techniques. The results show that the Tsallis entropy, through multi-scale analysis and multiple q index, has a great advantage over the classical Boltzmann-Gibbs-Shannon entropy. Practical applications of this study are proposed in order to demonstrate the potential of the method.
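For reference, the Tsallis entropy used above is S_q = (1 - Σᵢ pᵢ^q)/(q - 1), which recovers the Boltzmann-Gibbs-Shannon entropy as q → 1; the multiple-q analysis simply evaluates it at several q values to build a feature vector. A minimal sketch (NumPy assumed; the random image and q values are illustrative):

```python
import numpy as np

def tsallis_entropy(image, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of the grey-level
    histogram of `image`; approaches Shannon entropy as q -> 1."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Multiple-q feature vector, as in the multi-q analysis described above.
image = np.random.default_rng(0).integers(0, 256, size=(128, 128))
qs = [0.1, 0.5, 1.0, 1.5, 2.0, 3.0]
features = np.array([tsallis_entropy(image, q) for q in qs])
```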
158

Deep neural networks in computer vision and biomedical image analysis

Xie, Weidi January 2017 (has links)
This thesis proposes different models for a variety of applications, such as semantic segmentation, in-the-wild face recognition, microscopy cell counting and detection, standardized re-orientation of 3D ultrasound fetal brain volumes, and Magnetic Resonance (MR) cardiac video segmentation. Our approach is to employ large-scale machine learning models, in particular deep neural networks. Expert knowledge is either mathematically modelled as a differentiable hidden layer in the artificial neural networks, or used to break a complex task into several small and easy-to-solve tasks. Multi-scale contextual information plays an important role in pixel-wise prediction, e.g. semantic segmentation. To capture spatial contextual information, we present a new block for learning the receptive field adaptively by within-layer recurrence. While interleaving with the convolutional layers, receptive fields are effectively enlarged, reaching across the entire feature map or image. The new block can be initialized as the identity and inserted into any pre-trained network, thereby benefiting from the "pre-train and fine-tune" paradigm. Current face recognition systems are mostly driven by the success of image classification, where the models are trained by identity classification. We propose a multi-column deep comparator network for face recognition. The architecture takes two sets of images or frames (each containing an arbitrary number of faces) as inputs; facial part-based (e.g. eyes, noses) representations of each set are pooled out, dynamically calibrated based on the quality of the input images, and further compared with local "experts" in a pairwise way. Unlike in computer vision applications, collecting data and annotations is usually more expensive in biomedical image analysis. Therefore, models that can be trained with less data and weaker annotations are of great importance. We approach microscopy cell counting and detection based on density estimation, where only central dot annotations are needed. The proposed fully convolutional regression networks are first trained on a synthetic dataset of cell nuclei, later fine-tuned, and shown to generalize to real data. In 3D fetal ultrasound neurosonography, establishing a coordinate system over the fetal brain serves as a precursor for subsequent tasks, e.g. localization of anatomical landmarks, extraction of standard clinical planes for biometric assessment of fetal growth, etc. To align brain volumes into a common reference coordinate system, we decompose the complex transformation into several simple ones, which can be easily tackled with convolutional neural networks. The model is therefore designed to leverage the closely related tasks by sharing low-level features, and the task-specific predictions are then combined to reproduce the transformation matrix as the desired output. Finally, we address the problem of MR cardiac video analysis, in which we are interested in assisting clinical diagnosis based on fine-grained segmentation. To facilitate segmentation, we present one end-to-end trainable model that achieves multi-view structure detection, alignment (standardized re-orientation), and fine-grained segmentation simultaneously. This is motivated by the fact that CNNs are in essence neither rotation equivariant nor rotation invariant; therefore, adding pre-alignment into the end-to-end trainable pipeline can effectively decrease the complexity of segmentation for later stages of the model.
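Counting by density estimation, as used for the microscopy cells above, trains a fully convolutional network to regress a per-pixel density map whose integral gives the count. A minimal PyTorch sketch; the tiny architecture and random tensors below are illustrative stand-ins, not the thesis's actual FCRN or training data:

```python
import torch
import torch.nn as nn

class TinyDensityNet(nn.Module):
    """Illustrative fully convolutional regressor: image -> density map.
    Summing the predicted density map yields the estimated cell count."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),            # 1-channel density output
        )

    def forward(self, x):
        return self.net(x)

model = TinyDensityNet()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch: images, and target density maps made by placing a
# small Gaussian at every dot annotation (only dots are needed as labels).
images = torch.randn(4, 1, 128, 128)
targets = torch.rand(4, 1, 128, 128) * 1e-2

opt.zero_grad()
pred = model(images)
loss = loss_fn(pred, targets)
loss.backward()
opt.step()

estimated_counts = pred.detach().sum(dim=(1, 2, 3))  # one count per image
```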
159

Advanced Analysis Algorithms for Microscopy Images

He, Siheng January 2015 (has links)
Microscope imaging is a fundamental experimental technique in a number of diverse research fields, especially biomedical research. It begins with basic arithmetic operations that aim to reproduce the information contained in the experimental sample. With the rapid advancement in CCD cameras and microscopes (e.g. STORM, GSD), image processing algorithms that extract information more accurately and faster are highly desirable. The overarching goal of this dissertation is to further improve image analysis algorithms. As most microscope imaging applications start with fluorescence quantification, we first develop a quantification method for the fluorescence of adsorbed proteins on microtubules. Based on the quantified results, the adsorption of streptavidin and neutravidin to biotinylated microtubules is found to exhibit negative cooperativity due to electrostatic interactions and steric hindrance. This behavior is modeled by a newly developed kinetic analogue of the Fowler-Guggenheim adsorption model. The complex adsorption kinetics of streptavidin to biotinylated structures suggests that the nanoscale architecture of binding sites can result in complex binding kinetics and hence needs to be considered when these intermolecular bonds are employed in self-assembly and nanobiotechnology. In the second part, a powerful lock-in algorithm is introduced for image analysis. A classic signal processing algorithm, the lock-in amplifier, was extended to two dimensions (2D) to extract the signal in patterned images. The algorithm was evaluated using simulated image data and experimental microscopy images to extract the fluorescence signal of fluorescently labeled proteins adsorbed on surfaces patterned with chemical vapor deposition (CVD). The algorithm was capable of retrieving the signal with a signal-to-noise ratio (SNR) as low as -20 dB. The methodology holds promise not only for the measurement of adsorption events on patterned surfaces but in all situations where a signal has to be extracted from a noisy background in two or more dimensions. The third part develops an automated software pipeline for image analysis, Fluorescent Single Molecule Image Analysis (FSMIA). The software is customized especially for single molecule imaging. While processing microscopy image stacks, it extracts physical parameters (e.g. location, fluorescence intensity) for each molecular object. Furthermore, it connects molecules in different frames into trajectories, facilitating common analysis tasks such as diffusion analysis and residence time analysis. Finally, in the last part, a new algorithm is developed for the localization of imaged objects based on the search for the best-correlated center. This approach yields tracking accuracies comparable to those of Gaussian fitting at typical signal-to-noise ratios, but with an order-of-magnitude faster execution. The algorithm is well suited for super-resolution localization microscopy methods, since they rely on accurate and fast localization algorithms. It can also be adapted to localize objects that do not exhibit radial symmetry or that have to be localized in higher-dimensional spaces. Throughout this dissertation, the accuracy, precision and implementation of new image processing algorithms are highlighted. The findings not only further the theory behind digital image processing, but also enrich the toolbox for microscopy image analysis.
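The 2D lock-in extends the classic lock-in amplifier by mixing the image with in-phase and quadrature references at the pattern's spatial frequency and low-pass filtering the products. A minimal sketch (NumPy/SciPy assumed; the pattern frequency, filter width, and noise level are illustrative, not taken from the dissertation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lock_in_2d(image, fx, fy, sigma=8.0):
    """Recover the local amplitude of a pattern at spatial frequency
    (fx, fy) cycles/pixel buried in noise, by 2D lock-in demodulation."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phase = 2 * np.pi * (fx * x + fy * y)
    # Mix with quadrature references, then low-pass to keep the baseband.
    i_chan = gaussian_filter(image * np.cos(phase), sigma)
    q_chan = gaussian_filter(image * np.sin(phase), sigma)
    return 2 * np.sqrt(i_chan**2 + q_chan**2)   # local pattern amplitude

# Simulated test: a weak stripe pattern drowned in noise.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:256, 0:256]
signal = 0.1 * np.cos(2 * np.pi * 0.05 * x)
noisy = signal + rng.normal(scale=1.0, size=signal.shape)
amplitude = lock_in_2d(noisy, fx=0.05, fy=0.0)   # ~0.1 away from edges
```

In this toy setting the stripe amplitude (0.1) sits in unit-variance noise, an SNR well below 0 dB, yet the demodulated amplitude map recovers it, which is the regime the abstract reports.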
160

Arbitrary shape detection by genetic algorithms.

January 2005 (has links)
Wang Tong. Thesis submitted in June 2004. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 64-69). Abstracts in English and Chinese. Table of contents:
Chapter 1, Introduction (p.1): Hough transform; template matching; genetic algorithms; outline of the thesis.
Chapter 2, Hough transform and its common variants (p.7): the Hough transform (definition, parameter space, accumulator array); the gradient-based Hough transform (direction of gradient, accumulator array, peaks in the accumulator array, performance); the generalized Hough transform (GHT) (definition, R-table, procedure, analysis); edge detection (gradient-based methods, Laplacian of Gaussian, Canny edge detection).
Chapter 3, Probabilistic models (p.33): the randomized Hough transform (RHT) (basics, algorithm, advantages); the genetic model (genetic algorithm mechanism, a genetic algorithm for primitive extraction).
Chapter 4, Proposed arbitrary shape detection (p.42): the randomized generalized Hough transform (R-table properties and the general notion of a shape, using pairs of edges, extension to arbitrary shapes); a genetic algorithm with the Hausdorff distance (the Hausdorff distance, chromosome strings, discussion).
Chapter 5, Experimental results and comparisons (p.52): primitive extraction; arbitrary shape detection; summary of the experimental results.
Chapter 6, Conclusions (p.62): summary; future work. Bibliography (p.64).
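Chapter 4's combination can be sketched as a genetic algorithm whose fitness is the negative Hausdorff distance between a transformed template and the image edge points. SciPy assumed; the chromosome encoding, mutation scheme, and all hyper-parameters below are illustrative assumptions rather than the thesis's exact design:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def transform(template, chrom):
    """Apply a chromosome (tx, ty, scale, theta) to template points (N, 2)."""
    tx, ty, s, th = chrom
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return s * template @ rot.T + np.array([tx, ty])

def fitness(chrom, template, edges):
    """Negative symmetric Hausdorff distance: higher is a better match."""
    moved = transform(template, chrom)
    d = max(directed_hausdorff(moved, edges)[0],
            directed_hausdorff(edges, moved)[0])
    return -d

def run_ga(template, edges, pop_size=60, generations=100, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Chromosomes: tx, ty in [0, 100], scale in [0.5, 2], theta in [0, 2pi).
    pop = rng.uniform([0, 0, 0.5, 0], [100, 100, 2.0, 2 * np.pi],
                      size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(c, template, edges) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = parents + rng.normal(scale=[2, 2, 0.05, 0.1],
                                        size=parents.shape)  # mutate
        pop = np.vstack([parents, children])
    scores = np.array([fitness(c, template, edges) for c in pop])
    return pop[np.argmax(scores)]

# Tiny smoke test: find a shifted, scaled circle among edge points.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
template = np.c_[np.cos(theta), np.sin(theta)]          # unit circle
edges = transform(template, (40.0, 55.0, 1.5, 0.3))     # "detected" edges
best = run_ga(template, edges)   # expect roughly (40, 55, 1.5, any theta)
```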
