411
Mutual Enhancement of Environment Recognition and Semantic Segmentation in Indoor Environment / Challa, Venkata Vamsi. January 2024.
Background: The dynamic field of computer vision and artificial intelligence has continually evolved, pushing the boundaries in areas like semantic segmentation and environmental recognition, which are pivotal for indoor scene analysis. This research investigates the integration of these two technologies, examining their synergy and implications for enhancing indoor scene understanding. Applications of this integration span various domains, including smart home systems for enhanced ambient living, navigation assistance for cleaning robots, and advanced surveillance for security.

Objectives: The primary goal is to assess the impact of integrating semantic segmentation data on the accuracy of environmental recognition algorithms in indoor environments. Additionally, the study explores how environmental context can enhance the precision and accuracy of contour-aware semantic segmentation.

Methods: The research employed an extensive methodology, utilizing various machine learning models, including standard algorithms, Long Short-Term Memory networks, and ensemble methods. Transfer learning with models such as EfficientNet B3, MobileNetV3, and Vision Transformer was a key aspect of the experimentation. The experiments were designed to measure the effect of semantic segmentation on environmental recognition and its reciprocal influence.

Results: The findings indicated that integrating semantic segmentation data significantly enhanced the accuracy of environmental recognition algorithms. Conversely, incorporating environmental context into contour-aware semantic segmentation led to notable improvements in precision and accuracy, reflected in metrics such as Mean Intersection over Union (MIoU).

Conclusion: This research underscores the mutual enhancement between semantic segmentation and environmental recognition, demonstrating how each technology significantly boosts the effectiveness of the other in indoor scene analysis. Integrating semantic segmentation data notably elevates the accuracy of environmental recognition algorithms, while incorporating environmental context into contour-aware semantic segmentation substantially improves its precision and accuracy. The results also open avenues for advancements in automated annotation processes, paving the way for smarter environmental interaction.
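Editor's note: the abstract reports results in terms of Mean Intersection over Union (MIoU); for reference, here is a minimal NumPy sketch of the standard MIoU computation. The class count and toy label maps are invented for illustration, not taken from the thesis.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy usage with 4 semantic classes on a 2x3 label map
pred = np.array([[0, 1, 1], [2, 2, 3]])
target = np.array([[0, 1, 2], [2, 2, 3]])
print(mean_iou(pred, target, num_classes=4))
```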
412
FGSSNet: Applying Feature-Guided Semantic Segmentation on real world floorplans / Norrby, Hugo; Färm, Gabriel. January 2024.
This master thesis introduces FGSSNet, a novel multi-headed feature-guided semantic segmentation (FGSS) architecture designed to improve the generalization ability of segmentation models on floorplans by injecting domain-specific information into the latent space, guiding the segmentation process. FGSSNet features a U-Net segmentation backbone with a jointly trained reconstruction head attached to the U-Net decoder, tasked with reconstructing the injected feature maps and thereby forcing their utilization throughout the decoding process. A dedicated multi-headed feature extractor produces the domain-specific feature maps used by FGSSNet and also predicts the wall width used by our novel dynamic scaling algorithm, designed to ensure spatial consistency between the training and real-world floorplans. The results show that the reconstruction head proved redundant, diverting the network's attention away from the segmentation task and ultimately hindering its performance. Instead, the ablated model without the reconstruction head, FGSSNet-NoRec, showed increased performance by utilizing the injected features freely, showcasing their importance. FGSSNet-NoRec slightly improves on comparable U-Net models, achieving a wall IoU of 79.3% on a preprocessed CubiCasa5K dataset, and shows an average IoU increase of 3.0 units (5.3%) on the more challenging real-world floorplans, displaying superior generalization by leveraging the injected domain-specific information.
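Editor's note: FGSSNet's actual implementation is not reproduced here; the sketch below is a generic PyTorch illustration of the underlying idea of injecting externally extracted feature maps into a decoder's latent space. All module names, channel counts, and the concatenation-based fusion are assumptions for illustration, not the thesis's code.

```python
import torch
import torch.nn as nn

class InjectedDecoderBlock(nn.Module):
    """Decoder block that fuses injected domain-specific features at the bottleneck."""
    def __init__(self, latent_ch: int, injected_ch: int, out_ch: int):
        super().__init__()
        # Fuse the encoder latent with the injected feature maps by concatenation
        self.fuse = nn.Conv2d(latent_ch + injected_ch, out_ch, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose2d(out_ch, out_ch, kernel_size=2, stride=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, latent: torch.Tensor, injected: torch.Tensor) -> torch.Tensor:
        x = torch.cat([latent, injected], dim=1)  # channel-wise injection
        return self.up(self.act(self.fuse(x)))

# Toy usage: a 512-channel latent fused with 64 injected feature channels
block = InjectedDecoderBlock(latent_ch=512, injected_ch=64, out_ch=256)
latent = torch.randn(1, 512, 16, 16)
injected = torch.randn(1, 64, 16, 16)   # from the dedicated feature extractor
print(block(latent, injected).shape)     # torch.Size([1, 256, 32, 32])
```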
413
Psychographic questionnaires: a comparative review of scales and structures / Fuhr, Kelly. January 1900.
Master of Science / Food Science Institute / Delores Chambers / Psychographic Questionnaires: A Comparative Review of Structures and Scales
In recent years there has been a growing trend toward integrating psychographic profiles into sensory studies, with the aim of more holistically explaining consumer segmentation and preferences. With this shift in approach have come questions about the nature of psychographic scales and the theoretical implications of their structure. Given the plethora of existing psychographic scales in common practice, the purpose of this review is to give a concise overview of the breadth of structures, with the aim of helping sensory researchers identify the most appropriate scale for their needs. The review begins with a critical comparison of the three most common scale classes (Likert, semantic differential, and behavioral frequency) and their relative advantages and disadvantages. Following that, a review of psychographic questionnaire design highlights differences from sensory practices, drawing attention to sources of response bias in specific design typologies that may reduce data quality in product design research.
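Editor's note: for orientation, a minimal illustration of the three scale classes follows; the items are invented for this example and are not drawn from the review.

Likert (agreement with a statement): "I enjoy trying unfamiliar foods." 1 = strongly disagree to 5 = strongly agree.
Semantic differential (bipolar adjective pair): "For me, grocery shopping is:" boring 1 2 3 4 5 exciting.
Behavioral frequency (self-reported behavior): "How often do you read nutrition labels?" never / rarely / sometimes / often / always.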
414
Development of infrared thermography software to improve quality control of asphalt mix placement / Vézina, Martin. January 2014.
Cracks and potholes are very common defects on the roads of the Quebec road network. Good quality control during the placement of asphalt mix reduces the risk of these defects appearing. The ministère des Transports du Québec (MTQ) uses infrared thermography to detect non-conforming zones, that is, those that will become potholes or cracks. Thermal variations in the infrared image allow these zones to be detected. However, the software used by the MTQ is not well suited to detecting non-conforming zones. This thesis presents two methods for automatically detecting non-conforming zones. The first analyzes images taken by a thermal camera, while the second continuously analyzes data from an infrared scanner. Both methods use segmentation techniques to detect non-conforming zones, allowing fully automatic analysis of the data without any human intervention.
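Editor's note: as a rough illustration of threshold-based segmentation of a thermal image (one simple approach to flagging locally cold zones, not the thesis's actual algorithms), a sketch follows; the threshold value and temperatures are invented.

```python
import numpy as np

def segment_cold_zones(thermal: np.ndarray, threshold_c: float = 10.0) -> np.ndarray:
    """Flag pixels colder than the surrounding mat by more than threshold_c.

    Non-conforming zones in asphalt placement typically appear as locally
    colder regions in the infrared image.
    """
    baseline = np.median(thermal)              # typical mat temperature
    return (baseline - thermal) > threshold_c  # boolean mask of suspect zones

# Toy 4x4 "thermal image" in degrees Celsius with one cold spot
thermal = np.full((4, 4), 140.0)
thermal[2, 2] = 115.0
mask = segment_cold_zones(thermal, threshold_c=10.0)
print(mask.astype(int))
```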
415
Automated hippocampal location and extraction / Bonnici, Heidi M. January 2010.
The hippocampus is a complex brain structure that has been studied extensively and is subject to abnormal structural change in various neuropsychiatric disorders. The highest-definition in vivo method of visualizing the anatomy of this structure is structural Magnetic Resonance Imaging (MRI). Gross structure can be assessed by naked-eye inspection of MRI scans, but measurement is required to compare scans from individuals against normal ranges and to assess change over time in individuals. The gold standard of such measurement is manual tracing of the boundaries of the hippocampus on scans, known as a Region Of Interest (ROI) approach. ROI tracing is laborious, and there are difficulties with test-retest and inter-rater reliability, primarily due to uncertainty in designating the hippocampus boundary. An improved, less labour-intensive and more reliable method is clearly desirable. This thesis describes a fully automated hybrid methodology that is able to first locate and then extract hippocampal volumes from 3D 1.5T T1-weighted MRI brain scans automatically. The hybrid algorithm uses brain atlas mappings and fuzzy inference to locate hippocampal areas and create initial hippocampal boundaries. This initial location is used to seed a deformable manifold algorithm, and rule-based deformations are then applied to refine the estimate of the hippocampus locations. Finally, the hippocampus boundaries are corrected through an inference process that ensures adherence to an expected hippocampus volume. Compared with manual segmentation of the same hippocampi, the methodology achieves ICC values of 0.73 for the left and 0.81 for the right hippocampus; both values fall within the range of reliability testing for the manual 'gold standard' technique. Thus, this thesis describes the development and validation of a genuinely automated approach to hippocampal volume extraction, of potential utility in studies of a range of neuropsychiatric disorders, which could eventually find clinical applications.
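Editor's note: the abstract does not state which ICC variant was used; a common choice for agreement between an automated method and manual tracing is the two-way random-effects, absolute-agreement ICC(2,1), sketched below with invented volumes.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1).

    ratings: (n_subjects, n_raters), e.g. automated vs. manual volumes.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: automated vs. manual hippocampal volumes (mm^3) for 5 subjects
vols = np.array([[3100, 3050], [2800, 2900], [3300, 3250],
                 [2950, 3000], [3150, 3100]], dtype=float)
print(round(icc2_1(vols), 2))
```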
416
Family Plans: Market Segmentation with Nonlinear Pricing / Zhou, Bo. January 2014.
In the telecommunications market, firms often give consumers the option of purchasing an individual plan or a family plan. An individual plan gives a certain allowance of usage (e.g., minutes, data) for a single consumer, whereas a family plan allows multiple consumers to share a specific level of usage. The theoretical challenge is to understand how the firm stands to benefit from allowing family plans. In this paper, we use a game-theoretic framework to explore the role of family plans. An obvious way for family plans to be profitable is by drawing in very low-valuation consumers whom the firm would choose not to serve in the absence of a family plan. Interestingly, we find that even when a family plan does not draw any new consumers into the market, a firm can still benefit from offering it. This occurs primarily because of the strategic impact of the family plan on the firm's entire product line. By allowing high- and low-valuation consumers to share a joint allowance in the family plan, the firm is able to raise the price of the individual plan and extract more surplus from high-valuation consumers, because the family plan reduces the cannibalization problem. Furthermore, a family obtains a higher allowance than it would from the purchase of several individual plans and therefore contributes more profit to the firm. We also observe different types of quantity discounts in the firm's product line. Finally, we identify conditions under which the firm offers a pay-as-you-go plan.
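Editor's note: the cannibalization mechanism follows the logic of second-degree price discrimination. As a hedged sketch (this is the textbook two-type screening problem, not the paper's actual model), the firm's menu-design problem for individual plans can be written as follows, with types theta_H > theta_L, increasing concave usage utility u, and marginal allowance cost c:

```latex
% Firm chooses allowance-price menus (q_L, p_L), (q_H, p_H):
\begin{aligned}
\max_{(q_L,\,p_L),\,(q_H,\,p_H)} \;\; & p_L + p_H - c\,(q_L + q_H) \\
\text{s.t. } \; & \theta_L\,u(q_L) - p_L \ge 0
  \quad \text{(participation, low type)} \\
& \theta_H\,u(q_H) - p_H \ge \theta_H\,u(q_L) - p_L
  \quad \text{(incentive compatibility, high type)}
\end{aligned}
```

The high type's incentive-compatibility constraint binds and caps the price of the high-allowance plan; the abstract's argument is that a shared family allowance relaxes this constraint, letting the firm raise the individual-plan price.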
417
Brain perfusion imaging: performance and accuracy / Zhu, Fan. January 2013.
Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. The purpose of my PhD research is to develop novel methodologies for improving the efficiency and quality of brain perfusion-imaging analysis so that clinical decisions can be made more accurately and in a shorter time. This thesis consists of three parts.

First, my research investigates the possibility that parallel computing brings to making perfusion-imaging analysis faster, in order to deliver results used in stroke diagnosis earlier. Brain perfusion analysis using local Arterial Input Function (AIF) techniques takes a long time to execute due to its heavy computational load. As time is vitally important in the case of acute stroke, reducing analysis time, and therefore diagnosis time, can reduce the number of brain cells damaged and improve the chances of patient recovery. We present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General-Purpose computing on Graphics Processing Units) using the CUDA programming model. Our method aims to accelerate the process without any quality loss.

Second, specific features of perfusion source images are used to reduce noise impact, which consequently improves the accuracy of hemodynamic maps. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach using Gaussian process regression (GPR) makes use of the temporal information in the perfusion source images to reduce the noise level. Over the entire image, our GPR-based noise reduction method gains a 99% contrast-to-noise ratio improvement over the raw image and also improves the quality of hemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curves. Furthermore, the results show that GPR is superior to the alternative techniques compared in this study.

Third, my research explores automatic segmentation of perfusion images into potentially healthy areas and lesion areas, which can be used as additional information to assist clinical diagnosis. Since perfusion source images contain more information than hemodynamic maps, good utilisation of source images leads to better understanding than the hemodynamic maps alone. Correlation coefficient tests are used to measure the similarities between the expected tissue time-concentration curves (from reference tissue) and the measured time-concentration curves (from target tissue). This information is then used to distinguish tissue at risk and dead tissue from healthy tissue. A correlation-coefficient-based signal analysis method that directly spots suspected lesion areas in perfusion source images is presented. Our method delivers a clear automatic segmentation of healthy tissue, tissue at risk, and dead tissue. From our segmentation maps, it is easier to identify lesion boundaries than using traditional hemodynamic maps.
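Editor's note: as a minimal sketch of the GPR denoising idea applied to a single voxel's time-concentration curve, using scikit-learn rather than whatever implementation the thesis used; the kernel choice, signal shape, and noise level are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic time-concentration curve for one voxel: bolus-like contrast passage
t = np.linspace(0, 60, 61)                         # seconds
clean = 5.0 * (t / 10.0) ** 2 * np.exp(-t / 10.0)  # idealized curve
noisy = clean + rng.normal(0, 0.5, t.shape)        # acquisition noise

# RBF kernel captures the smooth temporal trend; WhiteKernel absorbs the noise
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t[:, None], noisy)

denoised = gpr.predict(t[:, None])                 # smooth baseline curve
print(float(np.abs(denoised - clean).mean()))      # residual error vs. ground truth
```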
418
Empirical investigation into the use of complexity levels in marketing segmentation and the categorisation of new automotive products / Taylor-West, Paul. January 2013.
This thesis is set in the context of the automotive industry, where launches of new products with high levels of technical innovation are becoming increasingly complex for consumers to comprehend. Car manufacturers need to understand consumer perceptions of new models so they can categorise their products from the consumer perspective and obtain a more accurate indication of where their products fit within increasingly defined consumer segments. Situational and personal variables now play the most important roles in marketing. In nested segmentation, consumer variables are concerned only with needs, attitudes, motivations and perceptions, and overlook any previous experience, exposure or familiarity that a consumer may or may not have had with the product. It is argued here that consumers have differing perceptions of newness, and that asking how new, and new to whom, would be valid questions for marketers when introducing new products. If car manufacturers can categorise their products in terms of newness for specific consumers, based on their levels of Expertise, Involvement and Familiarity with the product, manufacturers will be able to target appropriate markets more effectively. To explore this area, a mixed-methods research approach was applied. This research found that the level of Involvement with the product, from a motivational aspect, gave rise to different levels of interest and enthusiasm between consumers and has a direct impact on how different types of consumers view new products. In addition, the differing levels of consumer knowledge highlight the need to improve the targeting of marketing communications so that manufacturers provide a better understanding of complex new products to consumers. Current mass-marketing methods based on consumer demographics are no longer sufficient. This research found that a consumer's level of Expertise, Involvement and Familiarity (EIF) with a specific product can be captured using a multi-dimensional scale to measure consumer product knowledge, providing an accurate consumer segmentation tool. By offering different explanations of product innovations to these consumer segments, according to a customer's EIF, marketers will achieve more effective targeting, reduce marketing costs and increase marketing campaign response.
419
Composition and genomic organization of arthropod Hox clusters / Pace, Ryan M.; Grbić, Miodrag; Nagy, Lisa M. 10 May 2016.
Univ Arizona, Dept Mol & Cellular Biol
420
Microarray image processing: a novel neural network framework / Zineddin, Bachar. January 2011.
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed so as to offer high tolerances, there exists high signal irregularity across the surface of the microarray image. Imperfections in the microarray image generation process cause noise of many types, which contaminates the resulting image. These errors and noises propagate down through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of this technology it is crucial to obtain high-quality image data that truly reflects the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within the image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, results in sub-optimal data being fed into all downstream analysis methods. Although much recent research has addressed microarray image analysis and many methods have been proposed, some methods produce better results than others. In general, the most effective approaches require considerable run-time (processing) power to process an entire image. Furthermore, there has been little progress on developing sufficiently fast yet efficient and effective algorithms for segmenting microarray images using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
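Editor's note: among the evaluation measures named is PSNR; for reference, a minimal NumPy sketch of PSNR between a reference and a processed image follows. The 8-bit peak value and toy images are assumptions for illustration.

```python
import numpy as np

def psnr(reference: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage on an 8-bit image pair
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))
```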