571 |
A Direct Approach for the Segmentation of Unorganized Points and Recognition of Simple Algebraic Surfaces / Ein direktes Verfahren zur Segmentierung unstrukturierter Punktdaten und Bestimmung algebraischer Oberflächenelemente / Vanco, Marek 02 June 2003 (has links) (PDF)
In Reverse Engineering, a physical object is digitally reconstructed from a set of boundary points. In the segmentation phase, these points are grouped into subsets to facilitate subsequent steps such as surface fitting. In this thesis we present a segmentation
method with subsequent classification of simple algebraic surfaces.
Our method is direct in the sense that it operates directly on the
point set in contrast to other approaches that are based on a
triangulation of the data set.
The reconstruction process involves a fast algorithm for $k$-nearest
neighbors search and an estimation of first and second order surface
properties. The first order segmentation, which is based on normal vectors, provides an initial subdivision of the surface and detects sharp edges as well as flat or highly curved areas. One of the main features of our method is to proceed by alternating the steps of segmentation and normal vector estimation; only this iteration makes it possible to estimate the normals with the required accuracy and to generate a satisfactory segmentation. The second order
segmentation subdivides the surface according to principal curvatures
and provides a sufficient foundation for the classification of simple
algebraic surfaces. If the boundary of the original object contains such surfaces, the segmentation is optimized based on the result of a surface fitting procedure.
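For illustration, a minimal Python sketch of the first-order idea described above: normals estimated by PCA over the k nearest neighbors, and regions grown where adjacent normals agree. This is a generic stand-in rather than the thesis' algorithm; the fast k-nearest-neighbors search and the alternating re-estimation of normals are omitted, and the neighborhood size and angle threshold are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Estimate a unit normal per point via PCA over its k nearest neighbors."""
    points = np.asarray(points, dtype=float)
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        # The singular vector of least variance approximates the surface normal.
        normals[i] = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return normals

def first_order_segmentation(points, normals, k=12, angle_deg=20.0):
    """Grow regions over the k-NN graph; stop where normals disagree (edges)."""
    points = np.asarray(points, dtype=float)
    _, idx = cKDTree(points).query(points, k=k)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1, dtype=int)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in idx[p]:
                # Unoriented normals: compare by absolute dot product.
                if labels[q] == -1 and abs(normals[p] @ normals[q]) >= cos_thresh:
                    labels[q] = region
                    stack.append(q)
        region += 1
    return labels
```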
|
572 |
Texture analysis using Markovian methods and mathematical morphology: application to the analysis of urban areas in satellite images / ANALYSE DE TEXTURE PAR METHODES MARKOVIENNES ET PAR MORPHOLOGIE MATHEMATIQUE : APPLICATION A L'ANALYSE DES ZONES URBAINES SUR DES IMAGES SATELLITALES / LORETTE, ANNE. Zerubia, Josiane January 1999 (has links)
Doctoral thesis: Sciences and Techniques: Nice: 1999. / 1999NICE5327. 134 refs.
|
573 |
Temporally consistent semantic segmentation in videos / Raza, Syed H. 08 June 2015 (has links)
The objective of this thesis research is to develop algorithms for temporally consistent semantic segmentation in videos. Though many different forms of semantic segmentation exist, this research is focused on the problem of temporally-consistent holistic scene understanding in outdoor videos. Holistic scene understanding requires an understanding of many individual aspects of the scene, including 3D layout, objects present, occlusion boundaries, and depth. Such a description of a dynamic scene would be useful for many robotic applications including object reasoning, 3D perception, video analysis, video coding, segmentation, navigation and activity recognition.
Scene understanding has been studied with great success for still images. However, scene understanding in videos requires additional approaches to account for temporal variation and dynamic information, and to exploit causality. As a first step, image-based scene understanding methods can be directly applied to individual video frames to generate a description of the scene. However, these methods do not exploit temporal information across neighboring frames. Further, lacking temporal consistency, image-based methods can result in temporally-inconsistent labels across frames. This inconsistency can impact performance, as scene labels suddenly change between frames.
The objective of this study is to develop temporally consistent scene descriptive algorithms by processing videos efficiently, exploiting causality and data redundancy, and catering for scene dynamics. Specifically, we achieve our research objectives by (1) extracting geometric context from videos to give the broad 3D structure of the scene with all objects present, (2) detecting occlusion boundaries in videos due to depth discontinuity, and (3) estimating depth in videos by combining monocular and motion features with semantic features and occlusion boundaries.
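As a toy illustration of temporal consistency, the sketch below smooths per-frame class probabilities causally before taking the per-pixel argmax. This is only one generic way to damp label flicker, not the method developed in this thesis, and it ignores the motion compensation a real system would need; the smoothing factor is an assumed value.

```python
import numpy as np

def temporally_smoothed_labels(prob_frames, alpha=0.7):
    """prob_frames: per-frame (H, W, C) class-probability maps in temporal
    order; alpha weights the current frame against the smoothed past."""
    smoothed, labels = None, []
    for prob in prob_frames:
        # Causal exponential smoothing exploits redundancy between frames.
        smoothed = prob if smoothed is None else alpha * prob + (1 - alpha) * smoothed
        labels.append(smoothed.argmax(axis=-1))  # (H, W) label map per frame
    return labels
```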
|
574 |
Visual object category discovery in images and videos / Lee, Yong Jae, 1984- 12 July 2012 (has links)
The current trend in visual recognition research is to place a strict division between the supervised and unsupervised learning paradigms, which is problematic for two main reasons. On the one hand, supervised methods require training data for each and every category that the system learns; training data may not always be available and is expensive to obtain. On the other hand, unsupervised methods must determine the optimal visual cues and distance metrics that distinguish one category from another to group images into semantically meaningful categories; however, for unlabeled data, these are unknown a priori.
I propose a visual category discovery framework that transcends the two paradigms and learns accurate models with few labeled exemplars. The main insight is to automatically focus on the prevalent objects in images and videos, and learn models from them for category grouping, segmentation, and summarization.
To implement this idea, I first present a context-aware category discovery framework that discovers novel categories by leveraging context from previously learned categories. I devise a novel object-graph descriptor to model the interaction between a set of known categories and the unknown to-be-discovered categories, and group regions that have similar appearance and similar object-graphs. I then present a collective segmentation framework that simultaneously discovers the segmentations and groupings of objects by leveraging the shared patterns in the unlabeled image collection. It discovers an ensemble of representative instances for each unknown category, and builds top-down models from them to refine the segmentation of the remaining instances. Finally, building on these techniques, I show how to produce compact visual summaries for first-person egocentric videos that focus on the important people and objects. The system leverages novel egocentric and high-level saliency features to predict important regions in the video, and produces a concise visual summary that is driven by those regions.
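A hedged sketch of the grouping step this suggests: regions described both by appearance and by a context vector standing in for the object-graph, clustered jointly. The feature construction and the choice of k-means here are illustrative assumptions, not the thesis' actual descriptor or grouping algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_categories(appearance, context, n_categories=5, beta=0.5):
    """appearance: (N, Da) region appearance features; context: (N, Dg)
    vectors summarizing surrounding known-category regions (the role the
    object-graph plays); beta balances the two cues before clustering."""
    a = appearance / (np.linalg.norm(appearance, axis=1, keepdims=True) + 1e-9)
    g = context / (np.linalg.norm(context, axis=1, keepdims=True) + 1e-9)
    joint = np.hstack([(1.0 - beta) * a, beta * g])  # joint feature space
    return KMeans(n_clusters=n_categories, n_init=10).fit_predict(joint)
```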
I compare against existing state-of-the-art methods for category discovery and segmentation on several challenging benchmark datasets. I demonstrate that we can discover visual concepts more accurately by focusing on the prevalent objects in images and videos, and show clear advantages of departing from the status quo division between the supervised and unsupervised learning paradigms. The main impact of my thesis is that it lays the groundwork for building large-scale visual discovery systems that can automatically discover visual concepts with minimal human supervision.
|
575 |
Identifying key disseminators in social commerce: a segmentation study from the gatekeeping perspective / Chen, Yizhuo 03 August 2012 (has links)
In recent years, social commerce sites such as Groupon and LivingSocial have achieved great success in attracting new consumers and increasing store traffic for a growing number of businesses. However, it is still unclear how the information flow that reaches new consumers is generated. Understanding this information flow is the key to the question of what led to the success of these companies. In the online context, the key information disseminators can have both a large-scale network and a decisive influence on the nodes that are closely connected to them, indicating an important pattern in the consumer purchase process. Here, we argue that one of the prominent advantages of social commerce is the information dissemination process, during which word of mouth (WOM) is generated to boost consumer traffic. In the present study, we conduct a cluster analysis to segment online shoppers according to their information dissemination contribution. Gatekeeping theory was used to conceptualize consumers who tend to disseminate more commercial information and WOM in social commerce, providing the theoretical basis for clustering consumers. Our findings suggest that a sizable proportion of consumers constituted the gatekeeper group (approximately 25%). Gatekeepers tend to be highly active both in finding outside sources of information and in connecting them with inside social networks. In addition, different aspects of the potential to become gatekeepers divide the rest of the consumers into two groups. To date, the present research is the first to explore online consumer segmentation using the gatekeeping perspective.
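To make the segmentation step concrete, a small sketch of this kind of cluster analysis follows. The feature columns (deals shared, WOM messages posted, outside sources followed, network size) and the data values are hypothetical, not the study's actual survey measures; the number of clusters mirrors the three groups reported above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: consumers; columns: deals shared, WOM messages posted,
# outside sources followed, network size (all hypothetical measures).
X = np.array([
    [12, 30, 8, 450],
    [ 1,  2, 1,  60],
    [ 9, 25, 6, 380],
    [ 0,  1, 0,  40],
    [ 3,  5, 2, 120],
], dtype=float)

# Standardize, then segment into three groups: gatekeepers plus two
# groups differing in their potential to become gatekeepers.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(StandardScaler().fit_transform(X))
print(labels)
```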
|
576 |
Region detection and matching for object recognition / Kim, Jaechul 20 September 2013 (links)
In this thesis, I explore region detection and consider its impact on image matching for exemplar-based object recognition. Detecting regions is important to provide semantically meaningful spatial cues in images. Matching establishes similarity between visual entities, which is crucial for recognition. My thesis starts by detecting regions at both the local and the object level. Then, I leverage geometric cues of the detected regions to improve image matching for the ultimate goal of object recognition. More specifically, my thesis considers four key questions: 1) how can we extract distinctively-shaped local regions that also ensure repeatability for robust matching? 2) how can object-level shape inform bottom-up image segmentation? 3) how should the spatial layout imposed by segmented regions influence image matching for exemplar-based recognition? and 4) how can we exploit regions to improve the accuracy and speed of dense image matching? I propose novel algorithms to tackle these issues, addressing region-based visual perception from low-level local region extraction, to mid-level object segmentation, to high-level region-based matching and recognition.
First, I propose a Boundary Preserving Local Region (BPLR) detector to extract local shapes. My approach defines a novel spanning-tree based image representation whose structure reflects shape cues combined from multiple segmentations, which in turn provide multiple initial hypotheses of the object boundaries. Unlike traditional local region detectors that rely on local cues like color and texture, BPLRs explicitly exploit the segmentation that encodes global object shape. Thus, they respect object boundaries more robustly and reduce noisy regions that straddle object boundaries. The resulting detector yields a dense set of local regions that are both distinctive in shape and repeatable for robust matching.
Second, building on the strength of the BPLR regions, I develop an approach for object-level segmentation. The key insight of the approach is that object shapes are (at least partially) shared among different object categories--for example, among different animals, among different vehicles, or even among seemingly different objects. This shape sharing phenomenon allows us to use partial shape matching via BPLR-detected regions to predict the global object shape of possibly unfamiliar objects in new images. Unlike existing top-down methods, my approach requires no category-specific knowledge of the object to be segmented. In addition, because it relies on exemplar-based matching to generate shape hypotheses, my approach overcomes the viewpoint sensitivity of existing methods by allowing shape exemplars to span arbitrary poses and classes.
For the ultimate goal of region-based recognition, not only is it important to detect good regions, but we must also be able to match them reliably. Matching establishes similarity between visual entities (images, objects or scenes), which is fundamental for visual recognition. Thus, in the third major component of this thesis, I explore how to leverage geometric cues of the segmented regions for accurate image matching. To this end, I propose a segmentation-guided local feature matching strategy, in which segmentation suggests the spatial layout among the matched local features within each region. To encode such spatial structures, I devise a string representation whose 1D nature enables efficient computation to enforce geometric constraints. The method is applied to exemplar-based object classification to demonstrate the impact of my segmentation-driven matching approach.
Finally, building on the idea of regions for geometric regularization in image matching, I consider how a hierarchy of nested image regions can be used to constrain dense image feature matches at multiple scales simultaneously. Moving beyond individual regions, the last part of my thesis studies how to exploit regions' inherent hierarchical structure to improve image matching. To this end, I propose a deformable spatial pyramid graphical model for image matching. The proposed model considers multiple spatial extents at once--from the entire image, to grid cells, to every single pixel. The pyramid model strikes a balance between robust regularization by larger spatial supports on the one hand and accurate localization by finer regions on the other. Further, it is suitable for fast coarse-to-fine hierarchical optimization. I apply the method to pixel label transfer tasks for semantic image segmentation, improving upon the state-of-the-art in both accuracy and speed.
Throughout, I provide extensive evaluations on challenging benchmark datasets, validating the effectiveness of my approach. In contrast to traditional texture-based object recognition, my region-based approach enables the use of strong geometric cues such as shape and spatial layout that advance the state-of-the-art of object recognition. Also, I show that regions' inherent hierarchical structure allows fast image matching for scalable recognition. The outcome realizes the promising potential of region-based visual perception. In addition, all my code for the local shape detector, object segmentation, and image matching is publicly available, which I hope will serve as a useful new addition to vision researchers' toolboxes.
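As a rough illustration of the coarse-to-fine principle mentioned above, the sketch below estimates an integer alignment at a coarse resolution and refines it at finer levels within a small search window. It is not the deformable spatial pyramid graphical model itself, which optimizes all levels jointly; the level count and search radius are assumed values.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences over the overlapping area."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    d = a[:h, :w].astype(float) - b[:h, :w].astype(float)
    return float((d * d).sum())

def coarse_to_fine_shift(src, dst, levels=3, search=2):
    """Estimate an integer translation aligning src to dst, coarsest first."""
    dy = dx = 0
    for lvl in reversed(range(levels)):
        step = 2 ** lvl
        s, d = src[::step, ::step], dst[::step, ::step]
        best = None
        for ey in range(-search, search + 1):       # refine around the
            for ex in range(-search, search + 1):   # coarser-level answer
                cy, cx = dy + ey, dx + ex
                cost = ssd(np.roll(np.roll(s, cy, axis=0), cx, axis=1), d)
                if best is None or cost < best[0]:
                    best = (cost, cy, cx)
        _, dy, dx = best
        if lvl > 0:
            dy, dx = dy * 2, dx * 2   # convert to the next finer level's pixels
    return dy, dx
```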
|
577 |
Does Vocabulary Knowledge Affect Lexical Segmentation in Adverse Conditions? / Bishell, Michelle January 2015 (has links)
There is significant variability in the ability of listeners to perceive degraded speech. Existing research has suggested that vocabulary knowledge is one factor that differentiates better listeners from poorer ones, though the reason for such a relationship is unclear. This study aimed to investigate whether a relationship exists between vocabulary knowledge and the type of lexical segmentation strategy listeners use in adverse conditions. This study conducted error pattern analysis using an existing dataset of 34 normal-hearing listeners (11 males, 23 females, aged 18 to 35) who participated in a speech recognition in noise task. Listeners were divided into a higher vocabulary (HV) and a lower vocabulary (LV) group based on their receptive vocabulary score on the Peabody Picture Vocabulary Test (PPVT). Lexical boundary errors (LBEs) were analysed to examine whether the groups showed differential use of syllabic strength cues for lexical segmentation. Word substitution errors (WSEs) were also analysed to examine patterns in phoneme identification. The type and number of errors were compared between the HV and LV groups. Simple linear regression showed a significant relationship between vocabulary and performance on the speech recognition task. Independent samples t-tests showed no significant differences between the HV and LV groups in Metrical Segmentation Strategy (MSS) ratio or number of LBEs. Further independent samples t-tests showed no significant differences between the WSEs produced by HV and LV groups in the degree of phonemic resemblance to the target. There was no significant difference in the proportion of target phrases to which HV and LV listeners responded. The results of this study suggest that vocabulary knowledge does not affect lexical segmentation strategy in adverse conditions. Further research is required to investigate why higher vocabulary listeners appear to perform better on speech recognition tasks.
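For concreteness, a small sketch of how lexical boundary errors (LBEs) can be summarized into an MSS-style ratio, assuming the common four-way classification of LBEs by error type and syllable strength; the exact ratio convention used in this study may differ.

```python
from collections import Counter

def mss_ratio(lbes):
    """lbes: (error_type, syllable_strength) pairs, e.g. ('insertion', 'strong').
    The Metrical Segmentation Strategy predicts boundary insertions before
    strong syllables and deletions before weak ones; the ratio measures how
    closely a listener's errors follow that pattern."""
    counts = Counter(lbes)
    consistent = counts[('insertion', 'strong')] + counts[('deletion', 'weak')]
    total = sum(counts.values())
    return consistent / total if total else float('nan')

errors = [('insertion', 'strong'), ('deletion', 'weak'), ('insertion', 'weak')]
print(mss_ratio(errors))  # 2/3 of these LBEs are MSS-consistent
```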
|
578 |
Vascular plaque detection using texture based segmentation of optical coherence tomography images / Ocaña Macias, Mariano 14 September 2015 (links)
Cardiovascular disease is one of the leading causes of death in Canada, and atherosclerosis is considered its primary cause. Optical coherence tomography (OCT) provides a means for minimally invasive imaging and assessment of the textural features of atherosclerotic plaque. However, detecting atherosclerotic plaque by visual inspection of OCT images is usually difficult. We therefore developed unsupervised segmentation algorithms to detect atherosclerotic plaque automatically in OCT images, using three different clustering methods. Our method involves preprocessing of raw OCT images, feature selection and texture feature extraction using the Spatial Gray Level Dependence Matrix (SGLDM) method, and the application of three different clustering techniques: the K-means, Fuzzy C-means and Gustafson-Kessel algorithms. These segment the plaque regions in OCT images and map the cluster regions (background, vascular tissue, OCT degraded-signal region and atherosclerotic plaque) from the feature space back to the original preprocessed OCT image. We validated our results by comparing our segmented OCT images with actual photographic images of vascular tissue with plaque. / October 2015
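A hedged sketch of this pipeline using generic library routines follows: GLCM (SGLDM) texture features per patch, then K-means clustering into four tissue classes. The patch size, gray-level quantization and GLCM parameters are illustrative assumptions, and the Fuzzy C-means and Gustafson-Kessel variants are not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(patch, levels=32):
    """Contrast, homogeneity, energy and correlation of a quantized patch."""
    q = (patch.astype(float) / (patch.max() + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ('contrast', 'homogeneity', 'energy', 'correlation')])

def segment_patches(image, patch=16, n_clusters=4):
    """Cluster non-overlapping patches into four classes (e.g. background,
    vascular tissue, degraded-signal region, plaque)."""
    h, w = image.shape
    coords = [(y, x) for y in range(0, h - patch + 1, patch)
                     for x in range(0, w - patch + 1, patch)]
    feats = np.array([glcm_features(image[y:y + patch, x:x + patch])
                      for y, x in coords])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return coords, labels
```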
|
579 |
Real time video segmentation for recognising paint marks on bad wooden railway sleepers / Shaik, Asif ur Rahman January 2008 (links)
Wooden railway sleeper inspections in Sweden are currently performed manually by a human operator and are based on visual analysis. A machine vision based approach has been developed to emulate the visual abilities of the human operator and enable automation of the process. Through this process, bad sleepers are identified and a spot of a specific colour (blue in the current case) is marked on the rail so that maintenance operators can find the spot and replace the sleeper. The motive of this thesis is to help the operators identify those sleepers marked with a colour spot, using an "Intelligent Vehicle" capable of running on the track. By capturing video while running on the track and segmenting the object of interest (the spot), this work can be automated and human intervention minimized. The video acquisition process depends on camera position and source light to obtain adequate brightness, so four different combinations of camera position and source light were tested to record the videos and assess the validity of the proposed method. A sequence of real time rail frames is extracted from these videos and further processed (depending upon the data acquisition process) to identify the spots. After a spot is identified, each frame is divided into nine regions to determine the particular region in which the spot lies and to avoid overlap with noise. The proposed method thus reports, for each frame, the region in which the spot lies. From the generated results, some classification was made regarding data collection techniques, efficiency, time and speed. In this report, extensive experiments using image sequences from a particular camera are reported; the experiments were performed using the intelligent vehicle as well as a test vehicle. The results show a 95% success rate in identifying the spots when the video is used as recorded; with an alternative method in which some frames are skipped during pre-processing to increase the processing speed, the segmentation success rate dropped to 85% while the processing time was much shorter. This shows the validity of the proposed method for identifying spots on wooden railway sleepers, where time and efficiency can be traded off to obtain the desired result.
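As an illustration of the per-frame step described above, the sketch below thresholds a blue paint colour in HSV space and reports which of the nine regions contains the mark. The HSV bounds and the minimum pixel count are assumed values, not the calibrated settings used for each camera and light combination.

```python
import cv2
import numpy as np

def find_spot_region(frame_bgr, min_pixels=50):
    """Return the 3x3 grid index (0..8) containing the blue mark, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([100, 120, 60]), np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)        # rough "paint blue" range
    ys, xs = np.nonzero(mask)
    if len(xs) < min_pixels:
        return None                              # no mark in this frame
    cy, cx = ys.mean(), xs.mean()                # centroid of detected paint
    h, w = mask.shape
    row, col = min(int(cy * 3 / h), 2), min(int(cx * 3 / w), 2)
    return row * 3 + col
```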
|
580 |
Επεξεργασία εικόνων cDNA μικροσυστοιχιών βασισμένη σε μετασχηματισμούς κυματιδίων και τυχαίων πεδίων Markov / Complementary DNA microarray image processing based on wavelets and Markov random fields models / Μάντουκας, Θεόδωρος 27 April 2009 (links)
Complementary DNA microarrays are a powerful and efficient tool that uses genome sequence information to analyze the structure and function of tens of thousands of genes simultaneously. A typical cDNA microarray image is a collection of green and red discrete spots containing DNA. Each spot occupies a small fraction of the image and its mean fluorescence intensity is closely related to the expression level of the corresponding gene. The main process for measuring spot intensity values involves three tasks: gridding, segmentation and data extraction.
In the present study, spot location was accomplished using an automatic gridding method based on the continuous wavelet transform (CWT). Firstly, line profiles for the x and y axes were calculated. Secondly, the CWT was applied up to 15 scales to both profiles, using Daubechies 4 (db4) as the mother wavelet. Thirdly, a point-by-point summation of the signals over all 15 scales was calculated. Fourthly, a hard-thresholding wavelet-based technique was applied to each signal. Finally, spot centers and boundaries were defined by calculating the local maxima and local minima of both signals.
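A simplified sketch of this profile-based gridding follows: project the image onto each axis, smooth the profile, and take local extrema as grid positions. A plain moving average stands in for the thesis' summation of db4 CWT coefficients over 15 scales, and the window size is an assumed value.

```python
import numpy as np
from scipy.signal import argrelextrema

def grid_positions(image, win=9):
    """Return (row_centers, col_centers): local maxima of smoothed axis
    profiles; the local minima (not returned) mark spot boundaries."""
    kernel = np.ones(win) / win
    centers = []
    for axis in (1, 0):                 # row profile, then column profile
        profile = image.sum(axis=axis).astype(float)
        smooth = np.convolve(profile, kernel, mode="same")
        centers.append(argrelextrema(smooth, np.greater, order=win)[0])
    return centers[0], centers[1]
```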
The proposed segmentation method is divided into three major steps. Firstly, the à trous wavelet transform (AWT) was applied up to the second scale on the initial cell. Secondly, a hard threshold filter was applied to the detail coefficients in order to suppress the noise. Finally, the initial image, together with the approximations and details of each scale, was fed into an ensemble scheme based on an MRF model. The operators chosen for the ensemble scheme were Majority Vote, Min, Product and Probabilistic Product.
The validation of the proposed algorithms was accomplished using a high-quality simulated microarray image of 1040 cells with realistic morphological characteristics, generated with the Matlab microarray simulation model, and fourteen real cDNA microarray images: seven 16-bit grayscale TIFF images for each channel (green and red), collected from the DERICI public database. In order to investigate the performance of the algorithms in the presence of noise, the simulated image was corrupted with additive white Gaussian noise.
In the case of the simulated image, the segmentation accuracy was evaluated by means of the segmentation matching factor (SMF), the probability of error (PE) and the coefficient of determination (CD) with respect to the actual pixel classes (foreground and background pixels). In the case of the real images, the evaluation was based on the Mean Absolute Error (MAE), in order to measure their reliability indirectly.
According to our results on simulated cells, the proposed ensemble schemes managed to lead to more accurate spot determination in comparison to the conventional MRF model. Additionally, the Majority Vote operator achieved the highest score in all cases, especially on cells with high noise (SMF: 82.69%, PE: 6.60% and CD: 0.809), while the conventional MRF gathered the lowest score in all cases (SMF: 94.87%-82.69%, PE: 3.03%-9.85%, CD: 0.961-0.729). In the case of the real images, the Min operator achieved the lowest score (MAE: 803.96 and normalized MAE: 0.0738), in contrast to Majority Vote, which reached the highest score among the proposed operators (MAE: 990.49 and normalized MAE: 0.0738). Additionally, all the proposed algorithms managed to reduce the MAE value compared to the conventional MRF segmentation model (MAE: 1183.50 and normalized MAE: 0.0859).
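As a minimal illustration of the ensemble idea, the sketch below fuses several per-pixel segmentations with the Majority Vote, Min and Product rules. The MRF segmenters themselves are not reproduced; the inputs are assumed to be binary masks or foreground probabilities produced elsewhere, and the thresholds are illustrative.

```python
import numpy as np

def fuse(masks_or_probs, rule="majority"):
    """Combine per-pixel foreground evidence from several segmenters.
    masks_or_probs: list of (H, W) arrays, binary masks or probabilities."""
    p = np.stack([m.astype(float) for m in masks_or_probs])
    n = len(masks_or_probs)
    if rule == "majority":
        return (p > 0.5).sum(axis=0) * 2 > n      # pixelwise majority vote
    if rule == "min":
        return p.min(axis=0) > 0.5                # all segmenters must agree
    if rule == "product":
        return p.prod(axis=0) > 0.5 ** n          # product-of-evidence rule
    raise ValueError(f"unknown rule: {rule}")
```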
|