11 |
A Study of Segmentation and Normalization for Iris Recognition Systems. Mohammadi Arvacheh, Ehsan. January 2006.
Iris recognition systems capture an image of an individual's eye. The iris in the image is then segmented and normalized for the feature extraction process. The performance of iris recognition systems depends heavily on segmentation and normalization: even an effective feature extraction method cannot obtain useful information from an iris image that is not segmented or normalized properly. This thesis aims to enhance the segmentation and normalization processes in iris recognition systems in order to increase overall accuracy.

Previous iris segmentation approaches assume that the pupil boundary is a circle. However, according to our observations, a circle cannot model this boundary accurately. To improve segmentation quality, a novel active contour is proposed to detect the irregular pupil boundary. The method successfully detects all pupil boundaries in the CASIA database and increases recognition accuracy.

Most previous normalization approaches employ a polar coordinate system to transform the iris. Transforming the iris into polar coordinates requires a reference point as the polar origin. Since the pupil and limbus are generally non-concentric, there are two natural choices: the pupil center and the limbus center. However, their performance differences have not been investigated so far. We also propose a new reference point, the virtual center of a pupil with radius equal to zero, which we refer to as the linearly-guessed center. The experiments demonstrate that the linearly-guessed center provides much better recognition accuracy.

In addition to evaluating the pupil and limbus centers and proposing a new reference point for normalization, we reformulate normalization as a minimization problem. The advantage of this formulation is that it is not restricted by the circular assumption used in the reference point approaches. The experimental results demonstrate that the proposed method performs better than the reference point approaches.

Finally, previous normalization approaches transform the iris texture into a fixed-size rectangular block, and the shape and size of the normalized iris have not been investigated in detail. In this thesis, we study the size parameter of traditional approaches and propose a dynamic normalization scheme, which transforms an iris based on the radii of the pupil and limbus. The experimental results demonstrate that the dynamic normalization scheme performs better than the previous approaches.
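For context, the conventional baseline this abstract argues against can be sketched in code. The following is a minimal illustration (not the thesis's own method) of Daugman-style rubber-sheet normalization, assuming perfectly circular, concentric pupil and limbus boundaries; the function and parameter names are invented for the sketch.

```python
import numpy as np

def normalize_iris(image, pupil_center, pupil_r, limbus_r, out_h=64, out_w=256):
    """Unwrap the iris annulus into a fixed-size rectangular block using
    polar coordinates around a single reference point (here: pupil center).
    Assumes circular, concentric boundaries -- the simplification the
    thesis above argues is inaccurate."""
    cy, cx = pupil_center
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(pupil_r, limbus_r, out_h)
    rect = np.zeros((out_h, out_w), dtype=image.dtype)
    for i, r in enumerate(radii):
        # sample one ring of the annulus with nearest-neighbor lookup
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        rect[i] = image[ys, xs]
    return rect
```

Whatever reference point is chosen (pupil center, limbus center, or the linearly-guessed center) plays the role of `pupil_center` here, which is why its choice affects every row of the normalized block.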
|
13 |
Normalization of microRNA expression levels in Quantitative RT-PCR arrays. Deo, Ameya. January 2010.
Background: Real-time quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) has recently been used for the characterization and expression analysis of miRNAs. Data from such experiments require effective analysis methods to produce reliable, high-quality results. For the miRNA prostate cancer qRT-PCR data used in this study, the standard housekeeping normalization method fails because the endogenous controls used are not stable. Therefore, identifying appropriate normalization method(s) based on other data-driven principles is an important aspect of this study.

Results: Different normalization methods available in the R packages Affy and qpcrNorm were tested on the raw data. These methods reduce technical variation and represent robust alternatives to the standard housekeeping normalization method. Their performance was evaluated statistically and compared against each other as well as against the standard housekeeping method. The results suggest that the qpcrNorm quantile normalization method performs best of all methods tested.

Conclusions: The qpcrNorm quantile normalization method outperforms the other normalization methods and the standard housekeeping normalization method, supporting the hypothesis of the study. The data-driven methods used here can be applied as standard procedures in cases where endogenous controls are not stable.
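To illustrate the technique the abstract favors, here is a generic quantile-normalization sketch in Python. This is not the qpcrNorm implementation itself (which may differ, for example in how it handles ties); it shows only the core idea of forcing every array onto a common reference distribution.

```python
import numpy as np

def quantile_normalize(ct_matrix):
    """Quantile normalization of a (genes x arrays) matrix of Ct values.
    The value at rank k in each array is replaced by the mean of the
    rank-k values across all arrays, so every array ends up with an
    identical distribution. Ties are broken by argsort order here,
    which is a simplification."""
    X = np.asarray(ct_matrix, dtype=float)
    ranks = X.argsort(axis=0).argsort(axis=0)      # per-column rank of each entry
    reference = np.sort(X, axis=0).mean(axis=1)    # mean sorted profile across arrays
    return reference[ranks]
```

After this step, between-array technical variation in overall Ct level is removed, which is why such data-driven methods can substitute for unstable endogenous controls.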
|
15 |
För varje gång tror man att det är den sista gången... : En kvalitativ studie av kvarhållande mekanismer i biografiska skildringar av kvinnors erfarenheter av mäns våld i nära relation / Because every time you think it's going to be the last one... : A qualitative study of retaining mechanisms in biographic depictions of women's experiences of male-perpetrated domestic violence. Elmqvist, Lisa; Johnsson, Sara. January 2017.
This thesis analyses women's stories and experiences of living in a relationship marked by domestic violence. The study seeks a deeper understanding of the retaining mechanisms that prevent women from leaving their partners. The research uses qualitative textual analysis of autobiographical novels. With support from theories of shame and the normalization process, it shows how women stay in their relationships because of the mental degradation inflicted by the man. The retaining mechanisms are numerous and complex, and both socio-economic and psychological factors contribute to women staying in the relationship, as previous research has also concluded. This research concludes that the retaining mechanisms evolve and change throughout the relationship: at first the woman stays out of love for the man, but later she comes to embrace the man's version of reality and to legitimize the reasons for the violence.
|
16 |
Utilizing Universal Probability of Expression Code (UPC) to Identify Disrupted Pathways in Cancer Samples. Withers, Michelle Rachel. 03 March 2011.
Understanding the role of deregulated biological pathways in cancer samples has the potential to improve cancer treatment, making it more effective by selecting treatments that reverse the biological cause of the cancer. One of the challenges with pathway analysis is identifying a deregulated pathway in a given sample. This project develops the Universal Probability of Expression Code (UPC), a profile of a single deregulated biological pathway, and projects it into a cancer cell to determine if it is present. One of the benefits of this method is that rather than use information from a single over-expressed gene, it provides a profile of multiple genes, which has been shown by Sjoblom et al. (2006) and Wood et al. (2007) to be more effective. The UPC uses a novel normalization and summarization approach to characterize a deregulated pathway using only data from the array (Mixture model-based analysis of expression arrays, MMAX), making it applicable to all microarray platforms, unlike other methods. When compared to both Affymetrix's PMA calls (Hubbell, Liu, and Mei 2002) and Barcoding (Zilliox and Irizarry 2007), it performs comparably.
|
17 |
Databasdesign: Nulägesanalys av normalisering (Database design: a current-state analysis of normalization). Wesslén Weiler, Johannes; Öhrn, Emelie. January 2016.
Normalization was introduced in 1970 to organize data in relational databases so as to avoid redundant data and reduce the risk of anomalies. Today there are indications that a more nuanced view of normalization is needed, as modern databases face new challenges and requirements. This work is a case study in which three databases from different organizations are analyzed. With the normal forms as a starting point, an explorative analysis is made to identify the aspects that affect how normalization is conducted in industry. The conclusion is that it is difficult for a party outside the database to interpret and determine whether the normal forms are fulfilled. Aspects affecting normalization are: the developer's intuition, users' impact on data quality, and the technical debt that quick fixes create.
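To make the redundancy that normalization removes concrete, here is a small, entirely hypothetical example: a flat orders table with a transitive dependency (customer_id determines customer_city) decomposed into third normal form. The schema and values are invented for illustration and are not taken from the databases studied.

```python
# A hypothetical flat table: customer_city depends on customer_id,
# not on the key order_id -- a transitive dependency that duplicates
# the city on every order and invites update anomalies.
orders = [
    {"order_id": 1, "customer_id": "C1", "customer_city": "Lund", "total": 120},
    {"order_id": 2, "customer_id": "C1", "customer_city": "Lund", "total": 80},
    {"order_id": 3, "customer_id": "C2", "customer_city": "Umea", "total": 200},
]

# 3NF decomposition: each customer's city is stored exactly once,
# so it can no longer drift out of sync between rows.
customers = {row["customer_id"]: row["customer_city"] for row in orders}
orders_3nf = [{k: v for k, v in row.items() if k != "customer_city"}
              for row in orders]
```

The thesis's point is that real systems often stop short of such decompositions for reasons (intuition, data quality, quick fixes) that a schema-only inspection cannot reveal.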
|
18 |
MicroRNA expression profiling in endometrial adenocarcinoma. Jurcevic, Sanja. January 2015.
No description available.
|
19 |
General Auditory Model of Adaptive Perception of Speech. Vitela, Antonia David. January 2012.
One of the fundamental challenges for communication by speech is the variability in speech production and acoustics. Talkers vary in the size and shape of their vocal tracts, in dialect, and in speaking mannerisms, and these differences all affect the acoustic output. Despite this lack of invariance in the acoustic signal, listeners can correctly perceive the speech of many different talkers. This ability to adapt one's perception to the particular acoustic structure of a talker has been investigated for over fifty years. The prevailing explanation is that listeners construct talker-specific representations that serve as referents for subsequent speech sounds: listeners may either create mappings between acoustics and phonemes, or extract the vocal tract anatomy and shape of each individual talker. This research focuses on an alternative explanation. A separate line of work has demonstrated that much of the variance between talkers' productions can be captured by their neutral vocal tract shape (that is, the average shape of the vocal tract across multiple vowel productions). The model tested here is that listeners compute an average spectrum (long-term average spectrum, LTAS) of a talker's speech and use it as a referent. If this LTAS resembles the acoustic output of the neutral vocal tract shape, the neutral vowel, then it could accommodate some of the talker-based variability. The LTAS model yields four main hypotheses: 1) during carrier phrases, listeners compute an LTAS for the talker; 2) this LTAS resembles the spectrum of the neutral vowel; 3) listeners represent subsequent targets relative to this LTAS referent; 4) such a representation reduces talker-specific acoustic variability. The goal of this project was to further develop and test the predictions arising from these hypotheses. Results suggest that the LTAS model needs further investigation, as the simple model proposed does not explain the effects found across all studies.
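The LTAS computation described above can be sketched as follows. This is an assumed implementation: the frame length, hop size, and windowing choices are ours for illustration, not necessarily those used in the dissertation.

```python
import numpy as np

def ltas(signal, frame_len=512, hop=256):
    """Long-term average spectrum: the magnitude spectrum averaged over
    overlapping windowed frames of a talker's speech. Under the model
    described above, this average serves as the talker referent."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return mags.mean(axis=0)  # one averaged spectrum per talker
```

Averaging over many frames washes out phoneme-specific detail while preserving the talker's overall spectral envelope, which is exactly the property that lets the LTAS stand in for a talker-specific referent.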
|
20 |
Normalizing Object-oriented Class Styles in JavaScript. Gama, Widd. 22 January 2013.
JavaScript is the most widely used client-side scripting language, and has become increasingly popular as a crucial component of the AJAX technology. JavaScript is a dynamic, weakly typed, multi-paradigm programming language that supports object-oriented, imperative, and functional programming styles. Many different programmers appreciate this flexibility when implementing complex and interactive web applications. This wide range of possible styles can hinder program comprehension and make maintenance difficult, especially in large projects involving many different programmers. A particular problem is the several different ways in which object-oriented classes can be expressed in JavaScript. In this work we aim at enhancing the maintainability of object-oriented JavaScript applications by automatically normalizing the representation of classes to a single model. We begin by analyzing the different ways that JavaScript programmers have represented the class concept, identifying and cataloguing the different class patterns used in the language. We choose one of these, and show how it is possible to automatically migrate JavaScript applications from any mix of class styles to the chosen one, making it easier to understand and maintain object-oriented JavaScript programs. / Thesis (Master, Computing) -- Queen's University, 2013.
|