411

Psychographic questionnaires: a comparative review of scales and structures

Fuhr, Kelly January 1900 (has links)
Master of Science / Food Science Institute / Delores Chambers / In recent years there has been a growing trend toward integrating psychographic profiles into sensory studies, with the aim of explaining consumer segmentation and preferences more holistically. With this shift in approach have come questions about the nature of psychographic scales and the theoretical implications of their structure. Given the plethora of psychographic scales in common practice, the purpose of this review is to give a concise overview of the breadth of structures, with the aim of helping sensory researchers identify the most appropriate scale for their needs. The review begins with a critical comparison of the three most common scale classes, Likert, Semantic Differential and Behavioral Frequency, and their relative advantages and disadvantages. Following that, a review of psychographic questionnaire design highlights differences from sensory practices, drawing attention to sources of response bias in specific design typologies which may reduce data quality in product-design research.
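For readers unfamiliar with the three scale classes the review compares, the following sketch illustrates their structural differences in code. The item wordings, point counts and anchor labels are invented for illustration only; they are not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): the three scale classes the
# review compares, expressed as minimal item definitions.
from dataclasses import dataclass

@dataclass
class Item:
    text: str        # question or stimulus shown to the respondent
    anchors: tuple   # labels for the scale endpoints
    points: int      # number of response options

# Likert: degree of agreement with a statement.
likert = Item("I enjoy trying unfamiliar foods.",
              ("Strongly disagree", "Strongly agree"), 5)

# Semantic differential: a bipolar adjective pair anchors the two ends.
semantic_diff = Item("This product is:", ("Traditional", "Innovative"), 7)

# Behavioral frequency: how often a behavior occurs.
behavioral = Item("How often do you cook from scratch?",
                  ("Never", "Always"), 5)

def to_score(item: Item, response: int) -> float:
    """Map a 1..points response onto [0, 1] so scale classes can be pooled."""
    return (response - 1) / (item.points - 1)

print(to_score(likert, 4))  # 0.75
```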
412

Development of infrared thermography software to improve quality control of asphalt mix placement / Développement de logiciels de thermographie infrarouge visant à améliorer le contrôle de la qualité de la pose de l’enrobé bitumineux

Vézina, Martin January 2014 (has links)
Cracks and potholes are very common defects on Quebec's road network. Good quality control during asphalt mix placement reduces the risk of these defects appearing. The ministère des Transports du Québec (MTQ) uses infrared thermography to detect non-conforming zones, i.e., those that will become potholes or cracks. Thermal variations in the infrared image allow these zones to be detected. However, the software used by the MTQ is not well suited to detecting non-conforming zones. This thesis presents two methods for automatically detecting non-conforming zones. The first analyzes images taken by a thermal camera, while the second continuously analyzes data from an infrared scanner. Both methods use segmentation techniques to detect non-conforming zones, and they allow fully automatic analysis of the data without any human intervention.
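As a rough illustration of thermal-variation segmentation of this kind, the sketch below flags zones whose temperature deviates from the local mat temperature. It is a generic example with assumed filter sizes and thresholds, not the software developed in the thesis.

```python
# Generic sketch of thermal-variation segmentation; not the MTQ software.
# Assumes `thermal` is a 2D numpy array of surface temperatures (degrees C)
# from an infrared camera over freshly placed asphalt.
import numpy as np
from scipy import ndimage

def flag_nonconforming(thermal: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Label contiguous zones whose temperature deviates from the local mat
    temperature by more than k standard deviations (a simple proxy for the
    thermal anomalies that precede cracks and potholes)."""
    baseline = ndimage.uniform_filter(thermal, size=31)  # local mean temperature
    residual = thermal - baseline
    anomalous = np.abs(residual) > k * residual.std()
    # Remove isolated pixels, then label connected anomalous zones.
    anomalous = ndimage.binary_opening(anomalous, structure=np.ones((3, 3)))
    labels, n_zones = ndimage.label(anomalous)
    return labels

# Example on synthetic data: a cooler, under-compacted spot in a uniform mat.
mat = np.full((200, 200), 140.0) + np.random.normal(0, 0.5, (200, 200))
mat[80:100, 80:100] -= 15.0
print(flag_nonconforming(mat).max())  # >= 1: at least one flagged zone
```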
413

Automated hippocampal location and extraction

Bonnici, Heidi M. January 2010 (has links)
The hippocampus is a complex brain structure that has been studied extensively and is subject to abnormal structural change in various neuropsychiatric disorders. The highest-definition in vivo method of visualizing the anatomy of this structure is structural Magnetic Resonance Imaging (MRI). Gross structure can be assessed by naked-eye inspection of MRI scans, but measurement is required to compare scans from individuals against normal ranges and to assess change over time in individuals. The gold standard of such measurement is manual tracing of the boundaries of the hippocampus on scans, known as a Region Of Interest (ROI) approach. ROI tracing is laborious, and there are difficulties with test-retest and inter-rater reliability, primarily due to uncertainty in designating the hippocampus boundary. An improved, less labour-intensive and more reliable method is clearly desirable. This thesis describes a fully automated hybrid methodology that first locates and then extracts hippocampal volumes from 3D T1-weighted 1.5 T MRI brain scans. The hybrid algorithm uses brain atlas mappings and fuzzy inference to locate hippocampal areas and create initial hippocampal boundaries. This initial location seeds a deformable manifold algorithm, and rule-based deformations are then applied to refine the estimate of the hippocampus location. Finally, the hippocampus boundaries are corrected through an inference process that ensures adherence to an expected hippocampus volume. Compared with manual segmentation of the same hippocampi, the methodology achieves ICC values of 0.73 for the left and 0.81 for the right hippocampus; both values fall within the reliability range reported for the manual ‘gold standard’ technique. This thesis thus describes the development and validation of a genuinely automated approach to hippocampal volume extraction that is of potential utility in studies of a range of neuropsychiatric disorders and could eventually find clinical applications.
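The agreement statistic quoted above can be computed with the standard two-way ICC(2,1) formula of Shrout and Fleiss. The sketch below is a generic implementation applied to hypothetical manual and automated volumes; it is not code from the thesis.

```python
# Two-way random-effects, absolute-agreement, single-measures ICC(2,1).
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: (n_subjects, k_raters) array, e.g. column 0 = manual tracing
    volumes, column 1 = automated extraction volumes (mm^3)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-method means
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical volumes (mm^3): manual vs automated for five subjects.
vols = np.array([[3210.0, 3150], [2980, 3020], [3420, 3390],
                 [3105, 3230], [2890, 2850]])
print(round(icc_2_1(vols), 2))
```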
414

Family Plans: Market Segmentation with Nonlinear Pricing

Zhou, Bo January 2014 (has links)
In the telecommunications market, firms often give consumers the option of purchasing an individual plan or a family plan. An individual plan gives a certain allowance of usage (e.g., minutes, data) to a single consumer, whereas a family plan allows multiple consumers to share a specific level of usage. The theoretical challenge is to understand how the firm stands to benefit from allowing family plans. In this paper, we use a game-theoretic framework to explore the role of family plans. An obvious way a family plan can be profitable is by drawing in very low-valuation consumers whom the firm would choose not to serve in the absence of a family plan. Interestingly, we find that even when a family plan does not draw any new consumers into the market, a firm can still benefit from offering it. This finding occurs primarily because of the strategic impact of the family plan on the firm's entire product line. By allowing high- and low-valuation consumers to share a joint allowance in the family plan, the firm reduces the cannibalization problem and is able to raise prices to extract more surplus from high-valuation consumers on individual plans. Furthermore, a family obtains a higher allowance than it would from purchasing several individual plans, and therefore contributes more profit to the firm. We also observe different types of quantity discounts in the firm's product line. Finally, we identify conditions under which the firm offers a pay-as-you-go plan. / Dissertation
415

Brain perfusion imaging : performance and accuracy

Zhu, Fan January 2013 (has links)
Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. The purpose of my PhD research is to develop novel methodologies for improving the efficiency and quality of brain perfusion-imaging analysis so that clinical decisions can be made more accurately and in a shorter time. This thesis consists of three parts.

First, my research investigates the speed-up that parallel computing can bring to perfusion-imaging analysis, so that results used in stroke diagnosis are delivered earlier. Brain perfusion analysis using local Arterial Input Function (AIF) techniques takes a long time to execute due to its heavy computational load. As time is vitally important in the case of acute stroke, reducing analysis time and therefore diagnosis time can reduce the number of brain cells damaged and improve the chances of patient recovery. We present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose computing on Graphics Processing Units) using the CUDA programming model. Our method aims to accelerate the process without any quality loss.

Second, specific features of the perfusion source images are used to reduce the impact of noise, which consequently improves the accuracy of hemodynamic maps. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR) to exploit the temporal information in the perfusion source images and reduce the noise level. Over the entire image, our GPR-based noise reduction gains a 99% contrast-to-noise ratio improvement over the raw image and also improves the quality of hemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves and reduces the oscillations in the curves. Furthermore, the results show that GPR is superior to the alternative techniques compared in this study.

Third, my research explores automatic segmentation of perfusion images into potentially healthy areas and lesion areas, which can be used as additional information to assist clinical diagnosis. Since perfusion source images contain more information than hemodynamic maps, good use of the source images leads to better understanding than the hemodynamic maps alone. Correlation coefficient tests measure the similarity between the expected tissue time-concentration curves (from reference tissue) and the measured time-concentration curves (from target tissue); this information is then used to distinguish tissue at risk and dead tissue from healthy tissue. A correlation-coefficient-based signal analysis method that directly spots suspected lesion areas in the perfusion source images is presented. Our method delivers a clear automatic segmentation of healthy tissue, tissue at risk and dead tissue, and lesion boundaries are easier to identify from our segmentation maps than from traditional hemodynamic maps.
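As one concrete illustration of the temporal (fourth-dimension) denoising idea, the sketch below smooths a single voxel's noisy time-concentration curve with scikit-learn's Gaussian process regression. The synthetic curve, kernel choice and parameters are illustrative assumptions, not the settings used in the thesis.

```python
# Hedged sketch: GPR denoising of one voxel's time-concentration curve
# along the temporal axis of 4D perfusion data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 60, 40)[:, None]   # acquisition times (s)
# Gamma-variate-like bolus passage, peaking around t = 12 s (synthetic).
true_curve = 8 * (t.ravel() / 12) * np.exp(1 - t.ravel() / 12)
noisy = true_curve + np.random.normal(0, 0.8, t.shape[0])

# RBF captures the smooth bolus passage; WhiteKernel absorbs scanner noise.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.5)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, noisy)
denoised = gpr.predict(t)   # stable baseline, reduced oscillations

print("mean abs error, raw:     ", np.abs(noisy - true_curve).mean())
print("mean abs error, denoised:", np.abs(denoised - true_curve).mean())
```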
416

Empirical investigation into the use of complexity levels in marketing segmentation and the categorisation of new automotive products

Taylor-West, Paul January 2013 (has links)
This thesis is set in the context of the automotive industry, where launches of new products with high levels of technical innovation are becoming increasingly complex for consumers to comprehend. Car manufacturers need to understand consumer perceptions of new models so they can categorise their products from the consumer perspective and obtain a more accurate indication of where their products fit within increasingly well-defined consumer segments. Situational and personal variables now play the most important roles in marketing. In nested segmentation, consumer variables are concerned only with needs, attitudes, motivations and perceptions, and overlook any previous experience, exposure or familiarity that a consumer may or may not have had with the product. It is argued here that consumers have differing perceptions of newness, and that asking 'how new?' and 'new to whom?' would be valid questions for marketers when introducing new products. If car manufacturers can categorise their products in terms of newness for specific consumers based on their levels of Expertise, Involvement and Familiarity with the product, manufacturers will be able to target appropriate markets more effectively. To explore this area, a mixed-methods research approach was applied. This research found that the level of Involvement with the product, from a motivational aspect, gave rise to different levels of interest and enthusiasm between consumers and has a direct impact on how different types of consumers view new products. In addition, the differing levels of consumer knowledge highlight the need to improve the targeting of marketing communications, so that manufacturers provide a better understanding of complex new products to consumers. Current mass-marketing methods based on consumer demographics are no longer sufficient. This research found that a consumer's level of Expertise, Involvement and Familiarity (EIF) with a specific product can be captured using a multi-dimensional scale that measures consumer product knowledge and provides an accurate consumer segmentation tool. By offering different explanations of product innovations to these consumer segments, according to a customer's EIF, marketers will achieve more effective targeting, reduce marketing costs and increase marketing campaign response.
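As an illustration of how a multi-dimensional EIF measurement could drive segmentation, the sketch below clusters hypothetical respondents on their three EIF scores. The scores, scales and choice of three segments are invented; the thesis's actual instrument and analysis are not reproduced here.

```python
# Illustrative sketch: segmenting consumers by Expertise, Involvement and
# Familiarity (EIF) scores with k-means. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical EIF scores on 1-7 scales for 300 respondents.
novices     = rng.normal([2, 2, 2], 0.6, (100, 3))
enthusiasts = rng.normal([4, 6, 4], 0.6, (100, 3))
experts     = rng.normal([6, 5, 6], 0.6, (100, 3))
eif = np.clip(np.vstack([novices, enthusiasts, experts]), 1, 7)

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(eif)
for s in range(3):
    # Mean E, I, F profile of each recovered segment.
    print(s, eif[segments == s].mean(axis=0).round(1))
```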
417

Composition and genomic organization of arthropod Hox clusters

Pace, Ryan M., Grbić, Miodrag, Nagy, Lisa M. 10 May 2016 (has links)
Univ Arizona, Dept Mol & Cellular Biol
418

Microarray image processing : a novel neural network framework

Zineddin, Bachar January 2011 (has links)
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed to offer high tolerances, there is high signal irregularity across the surface of the microarray image. Imperfections in the microarray image generation process introduce noise of many types, which contaminates the resulting image. These errors and noise propagate down through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of such technology it is crucial to obtain high-quality image data that indeed reflects the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within an image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, results in sub-optimal data being used in all downstream analysis methods. Although there has recently been much research on microarray image analysis and many methods have been proposed, some methods produce better results than others, and in general the most effective approaches require considerable processing time for an entire image. Furthermore, there has been little progress on developing sufficiently fast yet efficient and effective algorithms for segmenting microarray images using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
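For context, the segmentation step described above, deciding which pixels represent a gene spot, can be illustrated with a simple intensity-based baseline. The sketch below uses two-cluster k-means on pixel intensities; it is a generic baseline, not the CNN framework the thesis develops.

```python
# Generic baseline sketch: signal/background segmentation of one spot patch.
import numpy as np
from sklearn.cluster import KMeans

def segment_spot(patch: np.ndarray) -> np.ndarray:
    """patch: 2D array of pixel intensities around one printed spot.
    Returns a boolean mask: True where the pixel belongs to the spot."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
        .fit_predict(patch.reshape(-1, 1)).reshape(patch.shape)
    # Take the brighter cluster to be the hybridized spot.
    spot_label = int(patch[labels == 1].mean() > patch[labels == 0].mean())
    return labels == spot_label

# Synthetic spot: bright disc on a noisy background.
yy, xx = np.mgrid[:32, :32]
patch = 100 + 10 * np.random.rand(32, 32)
patch[(yy - 16) ** 2 + (xx - 16) ** 2 < 81] += 150
print(segment_spot(patch).sum(), "pixels assigned to the spot")
```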
419

Privacy Protecting Surveillance: A Proof-of-Concept Demonstrator / Demonstrator för integritetsskyddad övervakning

Hemström, Fredrik January 2015 (has links)
Visual surveillance systems are increasingly common in our society today, and there is a conflict between the public's demand for security and the demand to preserve personal integrity. This thesis suggests a solution in which parts of the surveillance images are covered in order to conceal the identities of persons appearing in the video, but not their actions or activities. The covered parts could be encrypted and unlocked only by the police or another legal authority in the case of a crime. This thesis implements a proof-of-concept demonstrator using a combination of image processing techniques such as foreground segmentation, mathematical morphology, geometric camera calibration and region tracking. The demonstrator is capable of tracking a moderate number of moving objects and concealing their identities by replacing them with a mask or a blurred image. Functionality for replaying recorded data and unlocking individual persons is included. The concept demonstrator shows the full chain, from concealing the identities of persons to unlocking a single person in recorded data. Evaluation on a publicly available dataset shows overall good performance.
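The processing stages named above (foreground segmentation, morphological cleanup, identity-concealing replacement) can be sketched with standard OpenCV building blocks. This is a generic illustration under assumed parameters, not the thesis demonstrator, which also performs camera calibration, region tracking and encrypted unlocking.

```python
# Hedged sketch of a privacy-masking pipeline using OpenCV primitives.
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = np.ones((5, 5), np.uint8)

def anonymize(frame: np.ndarray) -> np.ndarray:
    """frame: BGR video frame. Returns the frame with moving regions blurred."""
    fg = bg_subtractor.apply(frame)                     # foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # fill small holes
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)      # identity-concealing blur
    mask = fg.astype(bool)
    out = frame.copy()
    out[mask] = blurred[mask]   # replace moving regions with blurred pixels
    return out
```

In a real system the blurred pixels would be stored encrypted alongside the masked video, so that a legal authority holding the key could restore a single person on demand, as the abstract describes.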
420

Classification of skin tumours through the analysis of unconstrained images

Viana, Joaquim Mesquita da Cunha January 2009 (has links)
Skin cancer is the most frequent malignant neoplasm in Caucasian individuals. According to the Skin Cancer Foundation, the incidence of melanoma, the most malignant of skin tumours, and the resultant mortality have increased exponentially during the past 30 years and continue to grow [1]. Although often intractable in advanced stages, skin cancer in general, and melanoma in particular, can achieve cure ratios of over 95% if detected at an early stage [1,55]. Early screening of lesions is therefore crucial if a cure is to be achieved. Most skin lesion classification systems rely on a human expert supported by dermatoscopy, an enhanced and zoomed photograph of the lesion zone. Nevertheless, and although contrary claims exist, as far as is known by the author, classification results are currently rather inaccurate and need to be verified through laboratory analysis of a piece of the lesion’s tissue. The aim of this research was to design and implement a system able to automatically classify skin spots as inoffensive or dangerous with a small margin of error; if possible, with higher accuracy than a human expert normally achieves, and certainly better than any existing automatic system. The system described in this thesis meets these criteria. It captures an unconstrained image of the affected skin area and extracts a set of relevant features that may lead to, and be representative of, the four main classification characteristics of skin lesions: Asymmetry, Border, Colour and Diameter. These features are then evaluated through Bayesian statistical processes, simple and Fuzzy k-Nearest Neighbour classifiers, a Support Vector Machine and an Artificial Neural Network in order to classify the skin spot as either being a melanoma or not. The characteristics selected and used throughout this work are, to the author’s knowledge, combined in an innovative manner. Rather than simply selecting absolute values of image characteristics, those numbers were combined into ratios, providing much greater independence from environmental conditions during image capture. During this work, image gathering became one of the most challenging activities: several of the initially identified sources failed, and the author had to use all the pictures he could find, namely on the Internet. This limited the test set to only 136 images. Nevertheless, the results were excellent. The algorithms developed were implemented into a fully working system which was extensively tested and gives a correct classification rate of between 76% and 92%, depending on the percentage of pictures used to train the system. In particular, the system gave no false negatives. This is crucial, since a system that gave false negatives might deter a patient from seeking further treatment, with a disastrous outcome. These results are achieved by detecting precise edges for every lesion image, extracting the features considered relevant, giving different weights to the various extracted features, and submitting these values to six classification algorithms (k-Nearest Neighbour, Fuzzy k-Nearest Neighbour, Naïve Bayes, Tree Augmented Naïve Bayes, Support Vector Machine and Multilayer Perceptron) in order to determine the most reliable combined process.
Training was carried out in a supervised way: all the lesions were previously classified by an expert in the field before being subjected to the scrutiny of the system. The author is convinced that the work presented in this PhD thesis is a valid contribution to the field of skin cancer diagnostics. Although its scope is limited (one lesion per image), the results achieved by this arrangement of segmentation, feature extraction and classification algorithms show that this is the right path towards a reliable early screening system. If and when values for age, gender and lesion evolution can be added to these data as classification features, the results will no doubt become even more accurate, allowing for an improvement in the survival rates of skin cancer patients.
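To make the "ratios rather than absolute values" idea concrete, the sketch below computes simplified, scale-invariant stand-ins for the Asymmetry, Border and Colour characteristics and hands them to one of the six classifiers named above (k-Nearest Neighbour). The feature definitions are simplified illustrations, not the thesis's exact features.

```python
# Hedged sketch of ratio-based ABCD-style features plus a k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def abcd_ratios(mask: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """mask: boolean lesion segmentation; rgb: colour image (H, W, 3).
    Returns scale-invariant ratio features standing in for Asymmetry,
    Border irregularity and Colour variability."""
    area = mask.sum()
    # Asymmetry proxy: mismatch between the mask and its horizontal flip
    # (a real system would flip about the lesion's own principal axis).
    asymmetry = np.logical_xor(mask, mask[:, ::-1]).sum() / area
    # Border proxy: perimeter^2 / area, a dimensionless compactness measure.
    perimeter = ((mask != np.roll(mask, 1, axis=0)).sum()
                 + (mask != np.roll(mask, 1, axis=1)).sum())
    border = perimeter ** 2 / area
    # Colour proxy: relative spread of intensities inside the lesion.
    inside = rgb[mask].mean(axis=1)
    colour = inside.std() / inside.mean()
    return np.array([asymmetry, border, colour])

# Hypothetical usage on a labelled set of segmented lesion images:
# X = np.vstack([abcd_ratios(m, im) for m, im in images])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
# predictions = clf.predict(X_new)
```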
