1

A Comparison of Filtering and Normalization Methods in the Statistical Analysis of Gene Expression Experiments

Speicher, Mackenzie Rosa Marie January 2020 (has links)
Both microarray and RNA-seq technologies are powerful tools commonly used in differential expression (DE) analysis. Gene expression levels are compared across treatment groups to determine which genes are differentially expressed. With both technologies, filtering and normalization are important steps in data analysis. In this thesis, real datasets are used to compare current analysis methods for two-color microarray and RNA-seq experiments. A variety of filtering, normalization, and statistical approaches are evaluated. The results of this study show that although there is still no widely accepted method for analyzing these types of experiments, the choice of method can substantially affect the number of genes declared to be differentially expressed.
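To make the two preprocessing steps the abstract names concrete, here is a minimal pure-Python sketch of low-count filtering followed by counts-per-million (CPM) normalization for RNA-seq count data. This is illustrative only: the thesis compares several competing methods, and the sample counts and thresholds below are invented.

```python
def cpm(counts):
    """Scale each sample's counts to counts-per-million."""
    normalized = []
    for sample in counts:  # one list of per-gene counts per sample
        total = sum(sample)
        normalized.append([c * 1e6 / total for c in sample])
    return normalized

def filter_low_counts(counts, min_count=10, min_samples=2):
    """Keep genes whose raw count reaches min_count in at least min_samples samples."""
    n_genes = len(counts[0])
    keep = []
    for g in range(n_genes):
        n_pass = sum(1 for sample in counts if sample[g] >= min_count)
        if n_pass >= min_samples:
            keep.append(g)
    return [[sample[g] for g in keep] for sample in counts]

# Example: 3 samples x 4 genes; the last gene is lowly expressed everywhere.
raw = [[120, 30, 500, 2],
       [100, 45, 480, 1],
       [130, 25, 510, 0]]
filtered = filter_low_counts(raw)  # drops the last gene
norm = cpm(filtered)               # each sample now sums to one million
```

Because filtering changes which genes survive and CPM changes their scale, swapping either step for an alternative (e.g. a different count threshold) can change which genes a downstream test declares differentially expressed, which is exactly the sensitivity the thesis studies.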
2

Dependence of the multidimensional data visualization results on data set normalization methods

Švaibovič, Natalja 12 July 2010 (has links)
This master's thesis investigates the application of artificial neural networks to the visualization of multidimensional data. Several data visualization strategies and several data normalization methods are reviewed. The SAMANN algorithm, which projects multidimensional data into a lower-dimensional space, is investigated in detail. Programs implementing four normalization methods and the SAMANN neural network were developed in the C++ programming language. The following experiments were performed: normalization of four data sets with the four methods, visualization of the multidimensional data in the plane using the SAMANN neural network, and calculation of the projection (SAMANN) error. Three real data sets and one artificial data set were used in the experiments. The dependence of the visualization accuracy (projection error) on the number of iterations, on the learning-rate parameter, and on the normalization method applied to the initial data set was investigated; summarized experimental results and conclusions are presented.
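The thesis implements its normalization methods in C++; as a language-neutral illustration, here is a sketch of two normalization methods commonly applied to multidimensional vectors before feeding them to a SAMANN-style network: per-feature min-max scaling to [0, 1] and z-score standardization. Whether these match the four methods the thesis tests is an assumption; the data values are invented.

```python
from math import sqrt

def minmax_normalize(data):
    """Scale every feature (column) of the data set to the [0, 1] range."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(x - l) / (h - l) if h > l else 0.0
             for x, l, h in zip(row, lo, hi)] for row in data]

def zscore_normalize(data):
    """Center each feature to mean 0 and scale it to unit standard deviation."""
    cols = list(zip(*data))
    means = [sum(c) / len(c) for c in cols]
    stds = [sqrt(sum((x - m) ** 2 for x in c) / len(c))
            for c, m in zip(cols, means)]
    return [[(x - m) / s if s > 0 else 0.0
             for x, m, s in zip(row, means, stds)] for row in data]

# Two features on very different scales, as in typical multidimensional data.
data = [[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]]
mm = minmax_normalize(data)  # each column now spans exactly [0, 1]
zs = zscore_normalize(data)  # each column now has mean 0
```

Because a neural network's training dynamics depend on input scale, the choice between such schemes can change the projection error the network converges to, which is the dependence the thesis measures.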
3

Comparison of Normalization Methods in Microarray Analysis

Yang, Rong 04 1900 (has links)
DNA microarrays can measure the expression of thousands of genes at a time in order to identify differentially expressed genes. The Affymetrix GeneChip system is a high-density oligonucleotide microarray platform that measures gene expression using hundreds of thousands of 25-mer oligonucleotide probes.

Affymetrix microarray data are preprocessed in three stages to produce gene expression measurements: background correction, normalization, and summarization. Numerous methods have been developed for each stage.

Our study is based on the Affymetrix MG_U74Av2 chip with 12,488 probe sets. Two mouse strains, NOR and NOR.NOD_Idd4/11, are hybridized in the experiment. We apply a number of commonly used and state-of-the-art normalization methods to the data set and compute the resulting expression measurements. The major methods discussed include Robust Multi-chip Average (RMA), MAS 5.0, GCRMA, PLIER, and dChip.

The methods are compared in terms of correlation coefficients, pairwise expression-measure plots, fold change, and Significance Analysis of Microarrays (SAM). / Thesis / Master of Science (MSc)
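The normalization step inside RMA (one of the methods this thesis compares) is quantile normalization: every array is forced to share the same empirical intensity distribution by averaging across sorted values. A simplified sketch that ignores ties, with invented intensity values:

```python
def quantile_normalize(arrays):
    """Make all arrays (lists of intensities) share one reference distribution."""
    n = len(arrays[0])
    # The mean of the k-th smallest value across arrays becomes
    # the k-th value of the shared reference distribution.
    sorted_arrays = [sorted(a) for a in arrays]
    reference = [sum(s[k] for s in sorted_arrays) / len(arrays)
                 for k in range(n)]
    result = []
    for a in arrays:
        # Map each value to the reference value at its rank within its own array.
        ranks = sorted(range(n), key=lambda i: a[i])
        out = [0.0] * n
        for rank, i in enumerate(ranks):
            out[i] = reference[rank]
        result.append(out)
    return result

arrays = [[5.0, 2.0, 3.0], [4.0, 1.0, 6.0]]
norm = quantile_normalize(arrays)
# After normalization, every array has identical sorted values.
```

This per-distribution matching is what distinguishes RMA-style normalization from per-chip scaling approaches such as the one used in MAS 5.0, and is one reason the methods can rank differentially expressed genes differently.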
4

Comparison between illumination normalization methods used to improve the facial recognition rate

Michelle Magalhães Mendonça 25 June 2008 (has links)
Distinct lighting conditions in an image can produce unequal representations of the same object, compromising segmentation and pattern-recognition processes, including facial recognition. Hence, the lighting distribution in an image is of great importance, and normalization algorithms based on newer techniques are still being researched.
This research evaluates the following illumination normalization algorithms from the literature, which have achieved good face-recognition results: LogAbout, a variation of the homomorphic filter, and a wavelet-based method. The goal was to identify the illumination normalization method that yields the best facial recognition rate. The recognition algorithms used were: eigenfaces, PCA (Principal Component Analysis) with an LVQ (Learning Vector Quantization) neural network, and wavelets with an MLP (Multilayer Perceptron) neural network. Images from the Yale face database, divided into three subsets, were used as input. The results show that the wavelet-based and LogAbout normalization methods yielded significant improvements in facial recognition. The experiments also showed that, in general, illumination normalization improves the facial recognition rate, except for the homomorphic-filter variation combined with the eigenfaces and wavelet-with-MLP recognition algorithms.
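Methods in the LogAbout family build on a logarithmic intensity transform, which compresses an image's dynamic range so that shadowed regions are lifted relative to highlights. The sketch below shows only that generic underlying transform, not the thesis's exact LogAbout formulation; the rescaling constant and the 3x3 "image" are invented.

```python
from math import log

def log_illumination_normalize(image, max_value=255.0):
    """Apply a log transform and rescale back to the original intensity range."""
    scale = max_value / log(1.0 + max_value)
    return [[scale * log(1.0 + pixel) for pixel in row] for row in image]

# A small grayscale patch with deep shadows and one bright highlight.
dark_face = [[0.0, 10.0, 30.0],
             [5.0, 60.0, 90.0],
             [20.0, 120.0, 255.0]]
normalized = log_illumination_normalize(dark_face)
# Black stays black, white stays white, and mid-tones are brightened,
# reducing the contrast between lit and shadowed facial regions.
```

Flattening the illumination this way is what allows appearance-based recognizers such as eigenfaces, which compare raw pixel intensities, to match faces photographed under different lighting.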
5

A longitudinal study of the oral properties of the French-English interlanguage: a quantitative approach of the acquisition of the /ɪ/-/iː/ and /ʊ/-/uː/ contrasts

Méli, Adrien 04 April 2018 (has links)
This study undertakes to assess the evolution of the phonological acquisition of the English /ɪ/-/i:/ and /ʊ/-/u:/ contrasts by French students. The corpus is made up of recordings of spontaneous conversations with native speakers. 12 students, 9 female and 3 male, were recorded over 4 sessions at six-month intervals. The approach adopted here is resolutely quantitative, and agnostic with respect to theories of second language acquisition such as Flege's (2005), Best's (1995) or Kuhl's (2008). In order to assess potential changes in pronunciation, an automatic alignment and extraction procedure was devised, based on PRAAT (Boersma 2001); phonemic and word alignments had been carried out beforehand with SPPAS (Bigi 2012) and P2FA (Yuan & Liberman 2008). More than 90,000 vowels were thus collected and analysed. The extracted data consist of information such as the number of syllables in the word, its dictionary pronunciation, the structure of the syllable the vowel appears in, the preceding and succeeding phonemes, their places and manners of articulation, and whether they belong to the same word, but above all of the F0, F1, F2, F3 and F4 formant values.
These values were collected at each centile of the vowel's duration, in order to take the influence of the consonantal environment into account. Moreover, theories such as vowel-inherent spectral change (Nearey & Assmann (1986), Morrison & Nearey (2006), Hillenbrand (2012), Morrison (2012)) and signal-modelling methods such as discrete cosine transforms (Harrington 2010) require formant values throughout the duration of the vowel. The reliability of the automatic procedure, the per-vowel statistical distributions of the formant values, and the normalization methods appropriate to spontaneous speech are then studied in turn. Speaker differences are assessed by analysing, with normalized values, the spectral changes, the mid-temporal formant values and the discrete cosine transforms. The methods used are the k nearest neighbours, linear and quadratic discriminant analyses, and linear mixed-effects regressions. A temporary conclusion of this work is that the acquisition of the /ɪ/-/i:/ contrast appears more robust than that of the /ʊ/-/u:/ contrast.
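One widely used speaker-normalization method for formant data of the kind this thesis evaluates is Lobanov's z-score transform: each speaker's formant values are centered and scaled by that speaker's own mean and standard deviation, removing vocal-tract-size differences so that speakers become directly comparable. Whether Lobanov is among the methods the thesis retains for spontaneous speech is an assumption; the F1 values below are invented.

```python
from math import sqrt

def lobanov(values):
    """Z-score one speaker's formant measurements (in Hz) against that
    speaker's own mean and standard deviation."""
    mean = sum(values) / len(values)
    std = sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

# F1 values (Hz) for the same three vowels from two speakers whose
# vocal tracts give them different absolute formant ranges.
speaker_a_f1 = [300.0, 400.0, 500.0]
speaker_b_f1 = [450.0, 600.0, 750.0]
norm_a = lobanov(speaker_a_f1)
norm_b = lobanov(speaker_b_f1)
# After normalization the two speakers' vowel spaces coincide.
```

Pooling normalized rather than raw formant values is what makes cross-speaker classifiers such as the discriminant analyses mentioned above measure vowel quality rather than speaker identity.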
