About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Abominable virtues and cured faults : disability, deviance, and the double voice in the fiction of L.M. Montgomery

Hingston, Kylee-Anne 25 July 2006 (has links)
This thesis examines the double-voiced representations of disability and illness in several works by Montgomery: the Emily trilogy (1923, 1925, 1927), the novel The Blue Castle (1926), the novella Kilmeny of the Orchard (1910), and two short stories, The Tryst of the White Lady (1922) and Some Fools and a Saint (published in 1931 but written in 1924). Although most of Montgomery's fiction in some way discusses illness and disability, often through secondary characters with disabilities, these works in particular feature disability as a central issue and use their heroes' and heroines' disabilities to impel the plots. While with one voice these works comply with conventional uses of disability in the love story genre, with another they criticize those very conventions. Using disability theory to analyze the fiction's double voice, my thesis reveals that the ambiguity created by the internal conflict in the texts evades reasserting the binary relationship which privileges ability and devalues disability.

This thesis uses disability theory to examine the double-voiced representation of disability in the fiction of L.M. Montgomery. Bakhtin describes the double voice as an utterance which has two speakers at the same time and expresses simultaneously two different intentions: the direct intention of the character who is speaking and the refracted intention of the author (324). In this thesis, however, I perceive the double voice not as the difference between the voices of the speaking character or narrator and of the author's intention. Instead, I will approach the double voice as simultaneous expressions of conflicting representations, whether or not the author intends them. These voices within the double voice internally dialogue with each other to reflect changing social attitudes toward disability. By applying disability theories, such as those by critics David Mitchell and Sharon Snyder, Susan Sontag, Martha Stoddard Holmes, and Rosemarie Garland Thomson, that assess how texts invoke disability as a literary technique, this thesis shows that the narrative structure of Montgomery's fiction promotes the use of disability as a literary and social construct, while its subtext challenges the investment of metaphoric meaning in disability.
2

Calibration of Time Domain Network Analyzer Measurements

Su, Kuo-Ying 06 July 2000 (has links)
none
3

Visual Attention and the Role of Normalization

Ni, Amy 12 December 2012 (has links)
Visual perception can be improved by the intentional allocation of attention to specific visual components. This “top-down” attention can improve perception of specific locations in space, or of specific visual features at all locations in space. Both spatial and feature attention are thought to involve the feedback of attention signals from higher cortical areas to visual cortex, where they modulate the firing rates of specific sensory neurons. However, the mechanisms that determine how top-down attention signals modulate the firing rates of visual neurons are not fully understood. Recently, a sensory mechanism called normalization has been implicated in mediating neuronal modulations by attention. Normalization is a form of gain control that adjusts the dynamic range of neuronal responses, particularly when more than one stimulus lies within a neuron's receptive field. Models of attention propose that this sensory mechanism affects how attention signals modulate the firing rates of sensory neurons, but it remains unclear exactly how normalization is related to the different forms of top-down attention. Here we use single-unit electrophysiological recordings from the middle temporal area (MT) of rhesus monkeys to measure the firing rates of sensory neurons. We ask the monkeys to perform a behavioral task that directs their attention to a particular location or feature, allowing us to independently measure modulations to firing rates due to normalization, spatial attention, or feature attention. We report that variations in the strength of normalization across neurons can be explained by an extension of conventional normalization: tuned normalization. Modulation by spatial attention depends greatly on the extent to which the normalization of a neuron is tuned, explaining a neuron-by-neuron correlation between spatial attention and normalization modulation strengths. Tuned normalization also explains a pronounced asymmetry in spatial attention modulations, in which neurons are more modulated by attention to their preferred, versus their non-preferred, stimulus. However, feature attention differs from spatial attention in its relationship to the normalization mechanism. We conclude that while spatial and feature attention appear to be mediated by a common top-down attention mechanism, they are differently influenced by the sensory mechanism of normalization.
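As a rough, self-contained illustration of the kind of computation described above, the following Python sketch implements a generic divisive-normalization response with a multiplicative attention gain. The equation form, parameter values, and variable names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def response(drive_pref, drive_null, attn_pref=1.0, attn_null=1.0,
             pool_weight=1.0, sigma=0.1):
    """Toy divisive-normalization response to two stimuli in one receptive field.

    drive_pref / drive_null : excitatory drive from the preferred / non-preferred
                              stimulus (set by the neuron's tuning).
    attn_pref / attn_null   : multiplicative attention gains (1.0 = unattended).
    pool_weight             : how strongly the non-preferred stimulus feeds the
                              suppressive pool; 1.0 ~ untuned normalization,
                              values near 0 ~ strongly tuned normalization.
    sigma                   : semi-saturation constant.
    """
    excitation = attn_pref * drive_pref + attn_null * drive_null
    suppression = attn_pref * 1.0 + pool_weight * attn_null * 1.0  # unit-contrast stimuli
    return excitation / (suppression + sigma)

for pool_weight in (1.0, 0.2):  # untuned vs. strongly tuned normalization
    unattended = response(1.0, 0.1, pool_weight=pool_weight)
    attend_pref = response(1.0, 0.1, attn_pref=4.0, pool_weight=pool_weight)
    print(f"pool_weight={pool_weight}: attention modulation = "
          f"{attend_pref / unattended:.2f}x")
# In this toy model, stronger normalization (pool_weight near 1) yields a larger
# attention modulation -- the kind of neuron-by-neuron relationship between
# normalization strength and attention modulation the abstract describes.
```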
4

A Comparison of Filtering and Normalization Methods in the Statistical Analysis of Gene Expression Experiments

Speicher, Mackenzie Rosa Marie January 2020 (has links)
Both microarray and RNA-seq technologies are powerful tools which are commonly used in differential expression (DE) analysis. Gene expression levels are compared across treatment groups to determine which genes are differentially expressed. With both technologies, filtering and normalization are important steps in data analysis. In this thesis, real datasets are used to compare current analysis methods for two-color microarray and RNA-seq experiments. A variety of filtering, normalization and statistical approaches are evaluated. The results of this study show that although there is still no widely accepted method for the analysis of these types of experiments, the choice of method can have a large impact on the number of genes that are declared differentially expressed.
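For context, here is a small sketch of two of the steps the abstract refers to: a simple low-count filter and quantile normalization, one of the normalization methods commonly compared in such studies. The thresholds and simulated data are illustrative assumptions, not the methods or data used in the thesis.

```python
import numpy as np

def filter_low_counts(counts, min_total=10):
    """Drop genes whose total count across samples is below min_total
    (a simple pre-filter often applied before differential expression testing)."""
    keep = counts.sum(axis=1) >= min_total
    return counts[keep], keep

def quantile_normalize(x):
    """Force every sample (column) of a genes-by-samples matrix to share the
    same empirical distribution (ties handled naively)."""
    order = np.argsort(x, axis=0)              # per-column ranks
    ref = np.sort(x, axis=0).mean(axis=1)      # mean of sorted columns = reference distribution
    out = np.empty_like(x, dtype=float)
    for j in range(x.shape[1]):
        out[order[:, j], j] = ref              # gene at rank i in sample j gets ref[i]
    return out

rng = np.random.default_rng(0)
counts = rng.negative_binomial(5, 0.3, size=(1000, 6)).astype(float)  # toy count matrix
filtered, _ = filter_low_counts(counts)
normalized = quantile_normalize(np.log2(filtered + 1))
# After quantile normalization every sample has an identical intensity distribution:
print(normalized.shape, np.allclose(np.sort(normalized[:, 0]), np.sort(normalized[:, 5])))
```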
5

From Acoustics to Articulation: Study of the acoustic-articulatory relationship along with methods to normalize and adapt to variations in production across different speakers

Ananthakrishnan, Gopal January 2011 (has links)
The focus of this thesis is the relationship between the articulation of speech and the acoustics of produced speech. There are several problems that are encountered in understanding this relationship, given the non-linearity, variance and non-uniqueness in the mapping, as well as the differences that exist in the size and shape of the articulators, and consequently the acoustics, for different speakers. The thesis covers mainly four topics pertaining to the articulation and acoustics of speech.

The first part of the thesis deals with variations among different speakers in the articulation of phonemes. While the speakers differ physically in the shape of their articulators and vocal tracts, the study tries to extract articulation strategies that are common to different speakers. Using multi-way linear analysis methods, the study extracts articulatory parameters which can be used to estimate unknown articulations of phonemes made by one speaker, knowing other articulations made by the same speaker and those unknown articulations made by other speakers of the language. At the same time, a novel method to select the number of articulatory model parameters, as well as the articulations that are representative of a speaker's articulatory repertoire, is suggested.

The second part is devoted to the study of uncertainty in the acoustic-to-articulatory mapping, specifically non-uniqueness in the mapping. Several studies in the past have shown that human beings are capable of producing a given phoneme using non-unique articulatory configurations, when the articulators are constrained. This was also demonstrated by synthesizing sounds using theoretical articulatory models. The studies in this part of the thesis investigate the existence of non-uniqueness in unconstrained read speech. This is carried out using a database of acoustic signals recorded synchronously along with the positions of electromagnetic coils placed on selected points on the lips, jaws, tongue and velum. This part, thus, largely devotes itself to describing techniques that can be used to study non-uniqueness in the statistical sense, using such a database. The results indicate that the acoustic vectors corresponding to some frames in all the phonemes in the database can be mapped onto non-unique articulatory distributions. The predictability of these non-unique frames is investigated, along with verifying whether applying continuity constraints can resolve this non-uniqueness.

The third part proposes several novel methods of looking at acoustic-articulatory relationships in the context of acoustic-to-articulatory inversion. The proposed methods include explicit modeling of non-uniqueness using cross-modal Gaussian mixture modeling, as well as modeling the mapping as local regressions. Another innovative approach towards the mapping problem has also been described in the form of relating articulatory and acoustic gestures. Definitions and methods to obtain such gestures are presented along with an analysis of the gestures for different phoneme types. The relationship between the acoustic and articulatory gestures is also outlined. A method to conduct acoustic-to-articulatory inverse mapping is also suggested, along with a method to evaluate it. An application of acoustic-to-articulatory inversion to improve speech recognition is also described in this part of the thesis.

The final part of the thesis deals with problems related to modeling infants acquiring the ability to speak, the model utilizing an articulatory synthesizer adapted to infant vocal tract sizes. The main problem addressed is related to modeling how infants acquire acoustic correlates that are normalized between infants and adults. A second problem of how infants decipher the number of degrees of articulatory freedom is also partially addressed. The main contribution is a realistic model which shows how an infant can learn the mapping between the acoustics produced during the babbling phase and the acoustics heard from the adults. The knowledge required to map corresponding adult-infant speech sounds is shown to be learnt without the total number of categories or one-one correspondences being specified explicitly. Instead, the model learns these features indirectly based on an overall approval rating, provided by a simulation of adult perception, on the basis of the imitation of adult utterances by the infant model.

Thus, the thesis tries to cover different aspects of the relationship between articulation and acoustics of speech in the context of variations for different speakers and ages. Although not providing complete solutions, the thesis proposes novel directions for approaching the problem, with pointers to solutions in some contexts. / QC 20111222 / Computer-Animated language Teachers (CALATea), Audio-Visual Speech Inversion (ASPI)
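The cross-modal Gaussian mixture idea mentioned in the third part can be illustrated with a short sketch: fit a GMM on joint acoustic-articulatory frames, then condition on a new acoustic frame to obtain a mixture over articulatory configurations, whose multiple weighted components are one statistical signature of a non-unique mapping. This is a generic GMM-regression formulation on made-up toy data, assuming hypothetical feature dimensions; it is not the thesis's actual model, features, or database.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(acoustic, articulatory, n_components=8, seed=0):
    """Fit a GMM on stacked [acoustic | articulatory] frames (a cross-modal joint model)."""
    joint = np.hstack([acoustic, articulatory])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(joint)

def invert(gmm, x, dim_x):
    """Condition the joint GMM on an acoustic frame x.

    Returns per-component posterior weights and conditional articulatory means;
    several components with substantial weight but distant means indicate a
    one-to-many (non-unique) mapping.  The weighted sum of the means is the
    usual minimum-mean-square-error estimate.
    """
    weights, cond_means = [], []
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_y = mu[:dim_x], mu[dim_x:]
        cov_xx, cov_yx = cov[:dim_x, :dim_x], cov[dim_x:, :dim_x]
        weights.append(gmm.weights_[k] *
                       multivariate_normal.pdf(x, mean=mu_x, cov=cov_xx))
        cond_means.append(mu_y + cov_yx @ np.linalg.solve(cov_xx, x - mu_x))
    weights = np.array(weights) / np.sum(weights)
    cond_means = np.array(cond_means)
    return weights, cond_means, weights @ cond_means

# Toy data standing in for 12-dim acoustic frames and 6-dim articulatory (EMA) frames.
rng = np.random.default_rng(0)
acoustic = rng.normal(size=(2000, 12))
articulatory = rng.normal(size=(2000, 6)) + 0.5 * acoustic[:, :6]
gmm = fit_joint_gmm(acoustic, articulatory)
w, means, mmse = invert(gmm, acoustic[0], dim_x=12)
print(w.round(3), mmse.round(2))
```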
6

A Unified Information Theoretic Framework for Pair- and Group-wise Registration of Medical Images

Zollei, Lilla 25 January 2006 (has links)
The field of medical image analysis has been rapidly growing for the past two decades. Besides a significant growth in computational power, scanner performance, and storage facilities, this acceleration is partially due to an unprecedented increase in the amount of data sets accessible for researchers. Medical experts traditionally rely on manual comparisons of images, but the abundance of information now available makes this task increasingly difficult. Such a challenge prompts for more automation in processing the images. In order to carry out any sort of comparison among multiple medical images, one frequently needs to identify the proper correspondence between them. This step allows us to follow the changes that happen to anatomy throughout a time interval, to identify differences between individuals, or to acquire complementary information from different data modalities. Registration achieves such a correspondence. In this dissertation we focus on the unified analysis and characterization of statistical registration approaches. We formulate and interpret a select group of pair-wise registration methods in the context of a unified statistical and information theoretic framework. This clarifies the implicit assumptions of each method and yields a better understanding of their relative strengths and weaknesses. This guides us to a new registration algorithm that incorporates the advantages of the previously described methods. Next we extend the unified formulation with analysis of the group-wise registration algorithms that align a population as opposed to pairs of data sets. Finally, we present our group-wise registration framework, stochastic congealing. The algorithm runs in a simultaneous fashion, with every member of the population approaching the central tendency of the collection at the same time. It eliminates the need for selecting a particular reference frame a priori, resulting in a non-biased estimate of a digital template. Our algorithm adopts an information theoretic objective function which is optimized via a gradient-based stochastic approximation process embedded in a multi-resolution setting. We demonstrate the accuracy and performance characteristics of stochastic congealing via experiments on both synthetic and real images. / PhD thesis
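As a concrete example of the kind of information theoretic objective that pair-wise registration methods optimize, the sketch below estimates mutual information between two images from their joint intensity histogram. It illustrates only a generic pair-wise objective, not the stochastic congealing algorithm, and all parameters and data are illustrative assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Mutual information between two images' intensities, estimated from their
    joint histogram -- the quantity a registration loop would maximize over
    transformation parameters."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)          # marginal of the moving image
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Toy example: MI of a smooth random field with a shifted copy of itself
# typically drops as the misalignment grows.
rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish field
for shift in (0, 2, 8):
    print(shift, round(mutual_information(image, np.roll(image, shift, axis=1)), 3))
```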
7

Protein Set for Normalization of Quantitative Mass Spectrometry Data

Lee, Wooram 20 January 2014 (has links)
Mass spectrometry has been recognized as a prominent analytical technique for peptide and protein identification and quantitation. With the advent of soft ionization methods, such as electrospray ionization and matrix assisted laser desorption/ionization, mass spectrometry has opened a new era for protein and proteome analysis. Due to its high-throughput and high-resolution character, along with the development of powerful data analysis software tools, mass spectrometry has become the most popular method for quantitative proteomics. Stable isotope labeling and label-free quantitation methods are widely used in quantitative mass spectrometry experiments. Proteins with stable expression level and key roles in basic cellular functions such as actin, tubulin and glyceraldehyde-3-phosphate dehydrogenase, are frequently utilized as internal controls in biological experiments. However, recent studies have shown that the expression level of such commonly used housekeeping proteins is dependent on cell type, cell cycle or disease status, and that it can change as a result of a biochemical stimulation. Such phenomena can, therefore, substantially compromise the use of these proteins for data validation. In this work, we propose a novel set of proteins for quantitative mass spectrometry that can be used either for data normalization or validation purposes. The protein set was generated from cell cycle experiments performed with MCF-7, an estrogen receptor positive breast cancer cell line, and MCF-10A, a non-tumorigenic immortalized breast cell line. The protein set was selected from a list of 3700 proteins identified in the different cellular sub-fractions and cell cycle stages of MCF-7/MCF-10A cells, based on the stability of spectral count data (CV<30 %) generated with an LTQ ion trap mass spectrometer. A total of 34 proteins qualified as endogenous standards for the nuclear, and 75 for the cytoplasmic cell fractions, respectively. The validation of these proteins was performed with a complementary, Her2+, SKBR-3 cell line. Based on the outcome of these experiments, it is anticipated that the proposed protein set will find applicability for data normalization/validation in a broader range of mechanistic biological studies that involve the use of cell lines. / Master of Science
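The coefficient-of-variation criterion mentioned above (CV < 30% of spectral counts) can be sketched in a few lines. The code below is a generic illustration on simulated counts with hypothetical protein names; it is not the thesis's actual pipeline or data.

```python
import numpy as np
import pandas as pd

def select_stable_proteins(spectral_counts, cv_threshold=0.30):
    """Flag proteins whose spectral counts are stable across conditions.

    spectral_counts : DataFrame, rows = proteins, columns = samples
                      (e.g. cell fractions / cell-cycle stages).
    Returns the proteins whose coefficient of variation (std / mean) across
    samples stays below the threshold -- candidates for an endogenous
    normalization / validation set.
    """
    mean = spectral_counts.mean(axis=1)
    cv = spectral_counts.std(axis=1, ddof=1) / mean
    return spectral_counts[(cv < cv_threshold) & (mean > 0)]

# Toy data standing in for spectral counts across five samples.
rng = np.random.default_rng(0)
counts = pd.DataFrame(rng.poisson(lam=rng.uniform(2, 40, size=200)[:, None],
                                  size=(200, 5)),
                      index=[f"protein_{i}" for i in range(200)])
stable = select_stable_proteins(counts)
print(f"{len(stable)} of {len(counts)} proteins pass the CV < 30% filter")
```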
8

Effect of image variation on computer aided detection systems / Betydelsen av normalisering av bilder vid datorstödd bildanalys

Rabbani, Seyedeh Parisa January 2013 (has links)
Computer Aided Detection (CAD) systems are expected to gain significant importance in reducing the workload of radiologists and enabling large screening programs. A large share of CAD systems are based on learning from examples, in order to distinguish images with disease from images without it. Images are reduced to numerical descriptors (feature vectors) and the system is trained on these features. A common practical problem with CAD systems is training the system on data from one source and testing it on data from a different source; the variations between sources usually degrade the CAD system's performance. Possible solutions to this problem are (1) normalizing images to make them look more alike, (2) choosing features that are less sensitive to variation, and (3) modifying the classifier so that it classifies data from different sources more accurately. In this project, the effect of image variation on a CAD system for tuberculosis on chest radiographs, developed at the Diagnostic Image Analysis Group, is studied. Tuberculosis is one of the major healthcare problems in some parts of the world (1.3 million deaths in 2007) [1]. Although the system performs well when training and test data come from the same source, using different sub-datasets for training and testing does not lead to the same result. To limit the effect of image variation on the CAD system, three different approaches to normalizing the images are applied: (1) simple normalization, (2) local normalization and (3) multi-band local normalization. All three approaches improve the performance of the system when different sub-datasets are used for training and testing. Given the improvement achieved by normalization, it is suggested as a solution to the problem stated above. Although the outcome of this study is satisfactory, there is room for further investigation, in particular testing different approaches for finding less variation-sensitive features and making the classification procedure more tolerant to variation.
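The normalization variants mentioned above can be illustrated with a short sketch of global and local intensity normalization. The smoothing scale, simulated data, and the multi-band note are assumptions for illustration only, not the exact recipe used in the project.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_normalize(image, eps=1e-6):
    """Global z-score normalization (a 'simple' normalization variant)."""
    image = image.astype(float)
    return (image - image.mean()) / (image.std() + eps)

def local_normalize(image, sigma=16.0, eps=1e-6):
    """Local contrast normalization: subtract a Gaussian-smoothed local mean and
    divide by the local standard deviation, so that images from different
    scanners or acquisition settings look more alike before feature extraction."""
    image = image.astype(float)
    local_mean = gaussian_filter(image, sigma)
    centered = image - local_mean
    local_std = np.sqrt(gaussian_filter(centered ** 2, sigma))
    return centered / (local_std + eps)

# A multi-band variant would apply local_normalize at several smoothing scales
# and stack the results as channels (assumed here, not the project's exact recipe).
rng = np.random.default_rng(0)
radiograph = rng.uniform(0, 4095, size=(256, 256))   # stand-in for a chest radiograph
print(simple_normalize(radiograph).std(), local_normalize(radiograph).std())
```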
9

Prezentace novináře v normalizační filmové tvorbě / Presentation of a journalist in film production of normalization

Novotná, Petra January 2014 (has links)
The roles of journalists in motion pictures are diverse: they are found among leading and supporting characters as well as among heroes and villains, and they appear in film genres ranging from sci-fi to musicals. In my diploma thesis I analyse how filmmakers of the normalization era presented journalists: whether they cast them as leading or supporting characters, and whether they portrayed them as heroes or villains. I aim to describe the types of journalists who appeared in film production during the normalization era in Czechoslovakia.
