591

Magnetic resonance imaging and magnetic resonance spectroscopy characterize a rodent model of covert stroke

Herrera, Sheryl Lyn 17 December 2012
Covert stroke (CS) comprises lesions in the brain that are often associated with risk factors such as a diet high in fat, salt, cholesterol and sugar (HFSCS). Developing a rodent model of CS that incorporates these characteristics is useful for developing and testing interventions. The purpose of this thesis was to determine whether magnetic resonance (MR) can detect brain abnormalities to confirm that this model has the desired anatomical effects. Ex vivo MR showed brain abnormalities in rats with the induced lesions that were fed the HFSCS diet. Spectra acquired from the fixed livers had an average percent area under the fat peak relative to the water peak of (20±4)% for HFSCS and (2±2)% for control. In vivo MR images showed significant differences between the surgeries used to induce the lesions (p = 0.04). These results show that MR identified abnormalities in the rat model and is therefore important in the development of this CS rodent model.
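
A note on the reported measurement: the fat-to-water figure is a ratio of integrated peak areas in the liver spectrum. A minimal sketch of that computation, assuming rectangle-rule integration and typical chemical-shift windows near 1.3 ppm (fat) and 4.7 ppm (water); the thesis does not publish its analysis code:

    import numpy as np

    def percent_fat(ppm, spectrum, fat=(0.9, 1.7), water=(4.3, 5.1)):
        """Fat peak area as a percentage of the water peak area.
        The ppm windows are assumptions, not taken from the thesis."""
        step = np.mean(np.abs(np.diff(ppm)))          # spectral spacing
        def area(lo, hi):
            mask = (ppm >= lo) & (ppm <= hi)
            return np.abs(spectrum[mask].sum() * step)  # rectangle rule
        return 100.0 * area(*fat) / area(*water)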
592

Segmentation and Beautification of Handwriting using Mobile Devices

Dürebrandt, Jesper January 2015
Converting handwritten or machine-printed documents into a computer-readable format allows more efficient storage and processing. The recognition of machine-printed text is very reliable with today's technology, but the recognition of offline handwriting remains an open problem for the research community due to the high variance in handwriting styles. Modern mobile devices are capable of performing complex tasks such as scanning invoices, reading traffic signs, and online handwriting recognition, but only a few applications treat offline handwriting. This thesis investigates the segmentation of handwritten documents into text lines and words, how the legibility of handwriting can be increased by beautification, and how both can be implemented on modern mobile devices. Text line and word segmentation are crucial steps towards a complete handwriting recognition system. The results of this thesis show that text line and word segmentation, along with handwriting beautification, can be implemented successfully on modern mobile devices, and a survey concludes that the writing on processed documents is more legible than on their unprocessed counterparts. An application for the iOS operating system is developed for demonstration.
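
The abstract does not name its segmentation method, but a common baseline for splitting a handwritten page into text lines is the horizontal projection profile: rows with little ink mark the gaps between lines. A hedged sketch of that baseline (the binarization step and the ink threshold are assumptions):

    import numpy as np

    def segment_lines(binary_page):
        """Split a binarized page (ink = 1, background = 0) into text
        lines by finding runs of rows that contain ink."""
        profile = binary_page.sum(axis=1)          # ink per row
        has_ink = profile > 0.02 * profile.max()   # assumed threshold
        lines, start = [], None
        for y, ink in enumerate(has_ink):
            if ink and start is None:
                start = y                          # a line begins
            elif not ink and start is not None:
                lines.append((start, y))           # a line ends
                start = None
        if start is not None:
            lines.append((start, len(has_ink)))
        return lines                               # (top, bottom) row pairs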
593

Texture-boundary detection in real-time

Hidayat, Jefferson Ray Tan January 2010
Boundary detection is an essential first step for many computer vision applications. In practice, boundary detection is difficult because most images contain texture. Texture-boundary detectors are normally complex and so cannot run in real-time; the few that do run in real-time leave much to be desired in terms of quality. This thesis proposes two real-time texture-boundary detectors – the Variance Ridge Detector and the Texton Ridge Detector – both of which can detect high-quality texture boundaries in real-time. The Variance Ridge Detector runs at 47 frames per second on 320 by 240 images while scoring an F-measure of 0.62 (out of a theoretical maximum of 0.79) on the Berkeley segmentation dataset. The Texton Ridge Detector runs at 10 frames per second but produces slightly better results, with an F-measure of 0.63. These objective measurements show that the two proposed texture-boundary detectors outperform all other texture-boundary detectors on either quality or speed. As boundary detection is so widely used, this development could improve many real-time computer vision applications.
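
The detector names suggest the underlying idea: texture boundaries sit on ridges of a local-variance map, since variance is high where two textures meet. A sketch of computing such a map (the window size is an assumption, and this is not the thesis's exact formulation):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(gray, size=9):
        """Local variance via E[x^2] - E[x]^2 over a size x size
        window; ridges of this map suggest texture boundaries."""
        g = gray.astype(float)
        mean = uniform_filter(g, size)
        mean_sq = uniform_filter(g * g, size)
        return np.maximum(mean_sq - mean * mean, 0.0)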
594

Système de recommandations utilisant une combinaison de filtrage collaboratif et de segmentation pour des données implicites / A recommender system using a combination of collaborative filtering and segmentation for implicit data

Renaud-Deputter, Simon January 2013
With the rise of technology and easy access to the Internet, users are overwhelmed by a wide range of available choices and a considerable amount of information [6]. Effective tools and techniques are needed to filter this data and make it usable in everyday operations. To this end, recommender systems, which have been the subject of active research and development over the past 15 years, are now able to provide users with choices [51] about what they might like to read, buy, watch, eat, and so on. The problem studied in this thesis is the use of implicit information to build recommender systems using a collaborative filtering approach. Much work has been done on collaborative filtering using explicit information such as ratings [48], [43], [19], [33]. However, the techniques developed for recommender systems whose items carry no explicit information remain rudimentary. The greatest challenge for recommender systems based on implicit information is the absence of user feedback when no expert, such as a salesperson, is available. Moreover, as noted in [51], even when a rating system exists, the proportion of rated items is often below 1%. Consequently, even for recommender systems that use explicit information such as ratings, it is crucial to have a method that takes advantage of implicit information. Progress in this area has been modest in recent years. There have been studies on social media recommendation based on users and keywords [18], on probabilistic modeling [30], and on semantic modeling for news recommendation [29]. While these techniques do use implicit information, only a few [40], [23] address the problem of recommending a store's products without the use of explicit information, and those methods generally require either the availability of an expert to collect customer feedback or the tuning of many parameters. In our study, we developed an algorithm that produces personal recommendations for a user using only implicit information. When compared with a similar system that uses ratings as explicit information, our technique yields very good results. Moreover, when compared with other systems using implicit information, it offers results that are comparable and sometimes superior.
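
For readers new to the area, the core of collaborative filtering over implicit data can be shown with a generic item-based scheme on a binary user-item interaction matrix. This is an illustrative sketch only, not the author's algorithm, which additionally combines collaborative filtering with a segmentation (clustering) of users:

    import numpy as np

    def recommend(interactions, user, k=5):
        """interactions: binary user x item matrix (1 = implicit signal,
        e.g. a purchase). Scores unseen items by item-item cosine
        similarity to the items the user has already touched."""
        norms = np.linalg.norm(interactions, axis=0) + 1e-9
        sim = (interactions.T @ interactions) / np.outer(norms, norms)
        scores = sim @ interactions[user]
        scores[interactions[user] > 0] = -np.inf  # skip already-seen items
        return np.argsort(scores)[::-1][:k]       # top-k item indices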
595

Moving Object Detection based on Background Modeling

Luo, Yuanqing January 2014
Aiming at moving object detection, and after studying several categories of background modeling methods, we design an improved ViBe algorithm based on an image segmentation algorithm. The ViBe algorithm builds its background model by storing a sample set for each pixel. To detect moving objects, it uses several techniques such as fast initialization, random update, and classification based on the distance between a pixel value and its sample set. In our improved algorithm, we first use multi-layer histograms to extract moving objects at the block level in a preprocessing stage. Second, we segment the blocks of moving objects with an image segmentation algorithm. The algorithm then constructs region-level information for the moving objects and designs classification principles for regions and a modification mechanism among neighboring regions. In addition, to solve the problem that the original ViBe algorithm easily introduces ghost regions into the background model, the improved algorithm designs and implements a fast ghost-elimination algorithm. Compared with traditional pixel-level background modeling methods, the improved method is more robust and reliable against factors such as background disturbance, noise, and the presence of moving objects in the initial stage. Specifically, our algorithm improves the precision rate from 83.17% with the original ViBe algorithm to 95.35%, and the recall rate from 81.48% to 90.25%. Considering the effect of shadow on moving object detection, this thesis also designs a shadow-elimination algorithm based on a Red, Green and Illumination (RGI) color feature, which can be converted from the RGB color space, and a dynamic matching threshold. Experiments demonstrate that the algorithm effectively reduces the influence of shadow on moving object detection. Finally, the thesis concludes and discusses future work.
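
For context, the ViBe model referred to above stores a set of past sample values per pixel and declares a new value background when enough samples lie within a radius of it. A sketch of that classification rule using the defaults published for ViBe (N = 20 samples, R = 20, #min = 2); the block-level and region-level extensions of this thesis are not reproduced here:

    import numpy as np

    def classify_pixel(value, samples, radius=20, min_matches=2):
        """ViBe-style test: background if enough stored samples are
        within `radius` of the new pixel value."""
        diffs = np.abs(samples.astype(int) - int(value))
        matches = np.count_nonzero(diffs < radius)
        return "background" if matches >= min_matches else "foreground"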
596

3D Segmentation of Cam-Type Pathological Femurs with Morphological Snakes

Telles O'Neill, Gabriel 30 June 2011
We introduce a new way to accurately segment the 3D femur from pelvic CT scans. The femur is a difficult target for segmentation due to its proximity to the acetabulum, its irregular shape, and the varying thickness of its hardened outer shell. Atypical bone morphologies, such as those present in hips suffering from Femoral Acetabular Impingements (FAIs), pose additional challenges. We overcome these difficulties by (a) dividing the femur into head and body regions, (b) analyzing the composition of the femoral head and the neighbouring acetabulum, and (c) segmenting with two levels of detail: rough and fine contours. Segmentations of the CT volume are performed iteratively on a slice-by-slice basis, and contours are extracted using the morphological snake algorithm. Our methodology was designed to require little initialization from the user and to deftly handle the large variation in femur shapes, most notably deformations attributed to cam-type FAIs. Our aim is to provide physicians with a new tool that creates patient-specific, high-quality 3D femur models while requiring much less time and effort. We tested our methodology on a database of 20 CT volumes acquired at the Ottawa General Hospital during a study of FAIs. We selected 6 CT scans from the database, for a total of 12 femurs, covering wide inter-patient variation. Of the 6 patients, 4 had unilateral cam-type FAIs, 1 had a bilateral cam-type FAI, and the last was from a control group. The femurs segmented with our method achieved an average volume overlap error of 2.71 ± 0.44% and an average symmetric surface distance of 0.28 ± 0.04 mm compared against the same, manually segmented femurs. These results are better than all comparable literature and accurate enough to be used in the creation of patient-specific 3D models.
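
The morphological snake used for contour extraction has an open implementation in scikit-image; a hedged sketch of applying it to a single CT slice follows (the preprocessing, initialization and iteration count are assumptions, and the thesis's rough/fine two-pass scheme is not reproduced):

    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def segment_slice(ct_slice, n_iter=200):
        """Evolve a morphological geodesic active contour on one slice."""
        gimage = inverse_gaussian_gradient(ct_slice.astype(float))
        init = np.zeros(ct_slice.shape, dtype=np.int8)
        init[10:-10, 10:-10] = 1    # crude rectangular initial level set
        return morphological_geodesic_active_contour(
            gimage, n_iter, init, smoothing=1, balloon=-1)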
597

Evaluating Text Segmentation

Fournier, Christopher 24 April 2013
This thesis investigates the evaluation of automatic and manual text segmentation. Text segmentation is the process of placing boundaries within text to create segments according to some task-dependent criterion. An example of text segmentation is topical segmentation, which aims to segment a text according to the subjective definition of what constitutes a topic. A number of automatic segmenters have been created to perform this task, and the question that this thesis answers is how to select the best automatic segmenter for such a task. This requires choosing an appropriate segmentation evaluation metric, confirming the reliability of a manual solution, and then finally employing an evaluation methodology that can select the automatic segmenter that best approximates human performance. A variety of comparison methods and metrics exist for comparing segmentations (e.g., WindowDiff, Pk), and all save a few are able to award partial credit for nearly missing a boundary. Those comparison methods that can award partial credit unfortunately lack consistency, symmetry, intuitiveness, and a host of other desirable qualities. This work proposes a new comparison method named boundary similarity (B), which is based upon a new minimal boundary edit distance between two segmentations. Near misses are frequent, even among manual segmenters (as exemplified by the low inter-coder agreement reported by many segmentation studies). This work adapts some inter-coder agreement coefficients to award partial credit for near misses using the new metric proposed herein, B. The methodologies employed by many works introducing automatic segmenters evaluate them simply by comparing their output to one manual segmentation of a text, often presenting nothing more than a series of mean performance values (with no standard deviation or standard error, and little if any statistical hypothesis testing). This work asserts that one segmentation of a text cannot constitute a “true” segmentation; specifically, one manual segmentation is simply one sample of the population of all possible segmentations of a text, and of the subset of desirable segmentations. This work further asserts that the adapted inter-coder agreement statistics proposed herein should be used to determine the reproducibility and reliability of a coding scheme and a set of manual codings, and that statistical hypothesis testing, using the specific comparison methods and methodologies demonstrated herein, should then be used to select the best automatic segmenter. This work proposes new segmentation evaluation metrics, adapted inter-coder agreement coefficients, and methodologies. Most importantly, this work experimentally compares the state-of-the-art comparison methods to those proposed herein on artificial data that simulates a variety of scenarios, and chooses the best one (B). The ability of the adapted inter-coder agreement coefficients, based upon B, to discern between various levels of agreement in artificial and natural data sets is then demonstrated. Finally, a contextual evaluation of three automatic segmenters is performed using the state-of-the-art comparison methods and B, with the methodology proposed herein, to demonstrate the benefits and versatility of B over its counterparts.
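
For context, WindowDiff, one of the state-of-the-art comparison methods B is measured against, slides a window of k units over the text and counts windows where the reference and hypothesis disagree on how many boundaries fall inside. A standard sketch (encoding segmentations as 0/1 boundary indicators is an assumed convention, and exact window-count conventions vary slightly between implementations):

    def window_diff(ref, hyp, k):
        """ref, hyp: equal-length lists of 0/1 boundary indicators between
        adjacent text units. Returns the fraction of k-wide windows whose
        boundary counts disagree (lower is better)."""
        n = len(ref)
        assert n == len(hyp)
        errors = sum(1 for i in range(n - k + 1)
                     if sum(ref[i:i + k]) != sum(hyp[i:i + k]))
        return errors / (n - k + 1)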
598

Speckle Reduction and Lesion Segmentation for Optical Coherence Tomography Images of Teeth

Li, Jialin 10 September 2010
The objective of this study is to apply digital image processing (DIP) techniques to optical coherence tomography (OCT) images and to develop computer-based, non-subjective quantitative analysis that can serve as a diagnostic aid in the early detection of dental caries. The study first compares speckle reduction effects on raw OCT image data by implementing spatial-domain and transform-domain speckle filtering. Region-based contour search and global thresholding techniques then examine digital OCT images with possible lesions to identify and highlight features indicating early-stage dental caries. The outputs of these processes, which combine image restoration and segmentation, can be used to distinguish lesions from normal tissue and to determine their characteristics prior to, during, and following treatment. The combination of image processing and analysis techniques in this thesis shows the potential to detect early-stage caries lesions successfully.
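
As one concrete instance of the spatial-domain speckle filtering compared above, a Lee-type adaptive filter blends each pixel with its local mean, trusting the pixel more where local variance is high (edges). This is a generic illustration, not the specific filter set evaluated in the thesis:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7, noise_var=None):
        """Adaptive Lee speckle filter over a size x size window."""
        img = img.astype(float)
        mean = uniform_filter(img, size)
        var = np.maximum(uniform_filter(img * img, size) - mean * mean, 0.0)
        if noise_var is None:
            noise_var = var.mean()                # crude global noise estimate
        weight = var / (var + noise_var + 1e-12)  # ~1 at edges, ~0 in flat areas
        return mean + weight * (img - mean)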
600

Classification of terrain using superpixel segmentation and supervised learning / Klassificering av terräng med superpixelsegmentering och övervakad inlärning

Ringqvist, Sanna January 2014
The usage of 3D modeling is expanding rapidly. Modeling from aerial imagery has become very popular due to its increasing number of both civilian and military applications, such as urban planning, navigation and target acquisition. This master thesis project was carried out at Vricon Systems at SAAB. The Vricon system produces high-resolution geospatial 3D data based on aerial imagery from manned aircraft, unmanned aerial vehicles (UAVs) and satellites. The aim of this work was to investigate to what degree superpixel segmentation and supervised learning can be applied to a terrain classification problem using imagery and digital surface models (DSMs). A further aim was to investigate how the height information from the digital surface model contributes compared to the information from the grayscale values. The goal was to identify buildings, trees and ground. Another task was to evaluate existing methods and compare results. The approach was divided into several parts: first the image was segmented using superpixel segmentation, then features were extracted, after which the classifiers were created, trained and finally evaluated. The best classification method in this thesis labeled approximately 90% of the superpixels correctly. The results were equal to, if not better than, other solutions available on the market.
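
The pipeline described (superpixel segmentation, feature extraction, classifier training and evaluation) can be assembled from standard libraries. A hedged sketch with SLIC superpixels and a random forest; the actual feature set is not public, so mean grayscale value and mean DSM height per superpixel stand in as assumed features:

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.ensemble import RandomForestClassifier

    def superpixel_features(gray, dsm, n_segments=800):
        """SLIC superpixels described by mean intensity and mean height."""
        labels = slic(gray, n_segments=n_segments, compactness=0.1,
                      channel_axis=None)          # grayscale input
        feats = np.array([[gray[labels == i].mean(), dsm[labels == i].mean()]
                          for i in np.unique(labels)])
        return labels, feats

    # With labeled training superpixels (X_train, y_train in
    # {building, tree, ground}):
    # clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    # labels, feats = superpixel_features(gray, dsm)
    # pred = clf.predict(feats)                   # one class per superpixel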
