  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

On the demixing and self-organized formation of neural population responses / Sur l'analyse et l'auto-organisation des réponses de populations neuronales

Brendel, Wieland 29 October 2014 (has links)
Les neurones dans les zones corticales supérieures, telles que le cortex préfrontal, répondent à une large gamme de variables sensorielles et motrices et affichent donc ce qu'on appelle une sélectivité mixte. L'information représentée par ces zones, ainsi que la façon dont elles la représentent, sont très mal comprises du fait de la diversité et de la complexité des réponses unicellulaires. Dans la première partie de la présente thèse, nous introduisons une nouvelle technique de réduction de dimensionnalité, l'analyse en composantes principales dissociées (demixed principal component analysis, dPCA). Cette méthode permet une extraction automatique des caractéristiques essentielles de l'activité complexe d'une population. Nous réanalysons quatre ensembles de données enregistrées dans des espèces différentes (rats et macaques), des zones corticales différentes (cortex préfrontal et cortex orbitofrontal), et lors de tâches expérimentales différentes (mémoire de travail tactile et visuo-spatiale, discrimination et catégorisation d'odeurs). Dans chacun de ces cas, la dPCA permet de fournir une visualisation compacte des données qui synthétise les caractéristiques pertinentes de la réponse de la population neuronale dans une seule figure. L'activité de la population est décomposée en peu de composantes dissociées qui représentent la plus grande partie de la variance des données et révèlent l'accord dynamique de la population aux stimuli, décisions, récompenses, etc. Par rapport aux approches traditionnelles, la dPCA simplifie nettement la visualisation des données et révèle des caractéristiques essentielles souvent masquées, comme des composantes majeures de l'activité neuronale indépendantes de la tâche spécifique, et le passage successif de composantes en composantes de l'information essentielle à la tâche tout au long de l'essai. La dPCA confirme donc la présence de réponses neuronales fortement structurées. 
Pourtant, il reste à démontrer comment de telles réponses peuvent émerger de grands réseaux de neurones bruyants et non-linéaires. Dans la seconde partie, nous nous appuyons sur l'hypothèse que le code de population optimal soit robuste par rapport aux perturbations (comme la mort neuronale ou l'échec de la génération de potentiels d'action) autant qu'il soit métaboliquement efficace. Ce code optimal peut être réalisé dans des réseaux équilibrant finement les courants excitatoires et inhibitoires. Nous fixons cet équilibre comme cible d'apprentissage et dérivons alors des règles de plasticité synaptique biologiquement plausibles pour les connexions récurrentes ainsi que feed-forward, autant dans les réseaux avec codage par taux de décharge que dans ceux avec codage temporel. De ces règles d'apprentissage, nous obtenons des réseaux récurrents auto-organisés avec une réponse de population structurée et robuste à de fortes perturbations. Ces réseaux présentent notamment de nombreuses propriétés qui sont caractéristiques d'enregistrements physiologiques dans les zones corticales supérieures, y compris l'irrégularité des décharges selon la loi de Poisson. Cette variabilité, loin d'être du bruit, permet en fait, grâce à une coopération optimale entre neurones, que les stimuli soient représentés le mieux possible au niveau de la population. Ces résultats suggèrent que l'équilibre entre excitation et inhibition, l'homéostasie, la plasticité en fonction du temps d'occurrence des impulsions, et les statistiques de décharge selon Poisson sont tous interdépendants, et qu'il s'agit de traits caractéristiques d'un codage optimal et non-redondant prenant en compte chacun des potentiels d'action sans exception. 
/ Neurons in higher cortical areas, such as the prefrontal cortex, are known to be tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. The diversity and complexity of the respective single-cell responses often obscures what information these areas represent, or how it is represented. The first half of this thesis introduces a novel dimensionality reduction technique, demixed principal component analysis (dPCA), which automatically discovers and highlights the essential features of complex population activities. We reanalyse population data from four datasets comprising different animals (rats and monkeys), different higher cortical areas (prefrontal cortex, orbitofrontal cortex) and different experimental tasks (tactile and visuospatial working memory tasks, odour discrimination and categorization tasks). In each case, dPCA provides a concise visualization that summarizes the relevant features of the population response in a single figure. The population activity is decomposed into a few demixed components that capture most of the variance in the data and that highlight the dynamic tuning of the population to stimuli, decisions, rewards, etc. Compared with traditional approaches, dPCA considerably simplifies the visualization of the data. Moreover, dPCA reveals important but inconspicuous features that remained unnoticed with more conventional approaches: these include strong, condition-independent components of the neural activity, and the shifting of task-relevant information between different components, i.e. between firing rate subspaces. How such highly structured population responses, as exposed by dPCA, can emerge in large populations of noisy and non-linear single cells is still an open question. In the second half of this thesis we start from the assumption that the optimal neural population code is both robust to perturbations (like neuronal death or spike failures) and computationally efficient. 
Such optimal codes can emerge in networks that exhibit a tight balance between excitatory and inhibitory currents. We use this balance as a target for learning and derive biologically plausible synaptic plasticity rules in rate-based and spiking neural networks for both the feed-forward and recurrent synaptic connections. The resulting self-organized recurrent neural networks display a highly structured population response, are robust to large perturbations and show many signatures of physiological recordings in higher cortical areas, including irregular spike trains with Poisson-like statistics. This variability, however, is not noise: optimal cooperation within the network ensures that input stimuli are perfectly tracked, spike by spike, at the level of the population. This result suggests that excitatory/inhibitory balance, homeostasis, spike-time-dependent plasticity rules and Poisson firing statistics go hand in hand and are signatures of an optimal, non-redundant population code where each single spike counts.
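The demixing idea behind dPCA can be illustrated with a toy sketch: split simulated population activity into a condition-independent marginalization (the average over stimuli) and a stimulus-dependent remainder, then run ordinary PCA on each part separately. This only conveys the flavor of the method — actual dPCA optimizes a joint demixing objective — and all data and dimensions below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: 50 neurons x 6 stimuli x 20 time bins.
# Each neuron mixes a stimulus-tuned part and a condition-independent ramp.
n, S, T = 50, 6, 20
time_ramp = np.linspace(0, 1, T)                          # shared temporal ramp
stim_tuning = rng.normal(size=(n, S))                     # per-neuron stimulus tuning
X = (stim_tuning[:, :, None] * np.sin(np.pi * time_ramp)  # stimulus component
     + rng.normal(size=(n, 1, 1)) * time_ramp             # condition-independent component
     + 0.1 * rng.normal(size=(n, S, T)))                  # noise

# Marginalization step (the core idea behind demixing): average over stimuli
# to get the condition-independent part, keep the deviation as the stimulus
# part, and apply ordinary PCA to each marginalization separately.
X_time = X.mean(axis=1, keepdims=True)        # n x 1 x T, condition-independent
X_stim = X - X_time                           # n x S x T, stimulus-dependent

def top_pc(M):
    """First principal component (unit vector over neurons) of an n x samples matrix."""
    M = M - M.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, 0], s[0]**2 / np.sum(s**2)    # component, fraction of variance

pc_time, var_time = top_pc(X_time.reshape(n, -1))
pc_stim, var_stim = top_pc(X_stim.reshape(n, -1))
print(f"condition-independent PC1 explains {var_time:.0%} of its marginalization")
print(f"stimulus PC1 explains {var_stim:.0%} of its marginalization")
```

Projecting the population activity onto `pc_time` and `pc_stim` yields components that are, by construction, tuned to only one of the two task variables — the "demixing" that mixed single-cell selectivity obscures.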
82

Análise de componentes principais na dinâmica da volatilidade implícita e sua correlação com o ativo objeto. / Principal component analysis of implied volatility dynamics and its correlation with the underlying.

Avelar, André Gnecco 03 July 2009 (has links)
Como a volatilidade é a única variável não observada nas fórmulas padrão de apreçamento de opções, o mercado financeiro utiliza amplamente o conceito de volatilidade implícita, isto é, a volatilidade que ao ser aplicada na fórmula de apreçamento resulte no preço correto (observado) das opções negociadas. Por isso, entender como as volatilidades implícitas das diversas opções de dólar negociadas na BM&F, o objeto de nosso estudo, variam ao longo do tempo e como estas se relacionam é importante para a análise de risco de carteiras de opções de dólar/real bem como para o apreçamento de derivativos cambiais exóticos ou pouco líquidos. A proposta de nosso estudo é, portanto, verificar se as observações da literatura técnica em diversos mercados também são válidas para as opções de dólar negociadas na BM&F: que as volatilidades implícitas não são constantes e que há uma relação entre as variações das volatilidades implícitas e as variações do valor do ativo objeto. Para alcançar este objetivo, aplicaremos a análise de componentes principais em nosso estudo. Com esta metodologia, reduziremos as variáveis aleatórias que representam o processo das volatilidades implícitas em um número menor de variáveis ortogonais, facilitando a análise dos dados obtidos. / Volatility is the only unobserved variable in the standard option pricing formulas, and hence implied volatility is a concept widely adopted by the financial market, meaning the volatility which would make the formula yield the option's real market price. 
Therefore, understanding how the implied volatilities of the dollar options traded at BM&F, the subject of our study, vary over time and how they relate to one another is important for risk analysis of dollar/real option books and for the pricing of exotic or illiquid currency derivatives. Our work's proposal is to verify whether the observations made in the technical literature on several markets also apply to the dollar options traded at BM&F: implied volatilities do vary over time, and there is a relation between this variation and the variation of the underlying asset price. In order to fulfill these goals, we apply principal component analysis in our study. This methodology helps us analyze the data by reducing the number of variables that represent the implied volatility process to a few orthogonal variables.
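As a hedged illustration of the approach, the sketch below applies PCA (via the eigendecomposition of the sample covariance matrix) to simulated daily implied-volatility changes across a strike grid. The "level" and "skew" factors driving the simulation are assumptions for illustration, not BM&F data; in such a setup the first few orthogonal components typically absorb almost all of the variation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily changes of implied volatility at 8 strikes:
# a common "level" shift plus a smaller "skew" tilt plus idiosyncratic noise.
n_days, n_strikes = 500, 8
strikes = np.linspace(-1, 1, n_strikes)               # moneyness grid (illustrative)
level = rng.normal(scale=1.0, size=(n_days, 1))       # parallel shifts
skew = rng.normal(scale=0.4, size=(n_days, 1))        # tilts across strikes
dvol = level + skew * strikes + rng.normal(scale=0.1, size=(n_days, n_strikes))

# PCA via eigendecomposition of the sample covariance matrix.
C = np.cov(dvol, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]                      # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]
explained = eigval / eigval.sum()

print("variance explained by the first two components:", explained[:2].sum())
```

With two underlying factors, the first two principal components recover the level and skew modes, which is exactly the dimensionality reduction the abstract describes.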
83

Barreiras e fatores críticos de sucesso relacionados à aplicação da produção mais limpa no Brasil

Vieira, Letícia Canal January 2016 (has links)
Em um contexto atual de busca pelo desenvolvimento sustentável, surge a necessidade de alteração de mentalidades e práticas. A Produção mais Limpa figura como um dos exemplos de iniciativa compatível com essa demanda. Seu objetivo é que sejam considerados previamente os efeitos negativos dos processos produtivos, fazendo com que seja reduzido o desperdício e a geração de poluentes. Mesmo estando de acordo com as aspirações atuais de busca por uma produção industrial mais sustentável, sua disseminação não ocorreu de forma satisfatória. Esta dissertação visa contribuir para a compreensão das barreiras para a aplicação da Produção mais Limpa, bem como identificar quais fatores devem estar presentes para que se atinja o sucesso na sua adoção, possibilitando a geração de uma proposta de framework. Para atingir os objetivos propostos um instrumento de pesquisa foi criado com base na literatura e entrevistas com profissionais. Com esse instrumento foi executada uma survey com profissionais que possuem envolvimento com a temática de Produção mais Limpa, atingindo-se um total de 185 respondentes. A partir dos resultados da análise de componentes principais, ficou evidenciado que os fatores mais cruciais dizem respeito à organização, estando relacionados com a visão, a cultura, o planejamento estratégico e os subsídios para a implantação da Produção mais Limpa, que constituíram na primeira e segunda componentes da análise. No caso das barreiras, destaca-se a existência de uma visão e cultura organizacional inadequadas (primeira componente), seguido da falta de apoio externo (segunda componente). Também foram encontrados indícios de que pode haver uma má compreensão do conceito de Produção mais Limpa, além de uma educação ambiental inadequada. 
Ao analisarem-se as medidas que podem ser tomadas para que a Produção mais Limpa tenha sua aplicação de forma mais efetiva, percebe-se que o principal é o reposicionamento do ambiente externo como um forte incentivador da aplicação da Produção mais Limpa, abandonando a posição de destaque ao serem observadas as barreiras. / In the present context of the pursuit of sustainable development, the need to change mentalities and practices arises. Cleaner Production is one example of an initiative compatible with this demand. Cleaner Production aims to consider the negative effects of productive processes in advance, thereby reducing waste and the generation of pollutants. Although this concept is aligned with current aspirations for more sustainable industrial production, its dissemination has not occurred in a satisfactory way. This dissertation seeks to contribute to the comprehension of the barriers to the application of Cleaner Production, as well as to identify the critical success factors for its adoption, enabling the proposal of a framework. To reach the proposed objectives, a research instrument was created based on the literature and on interviews with professionals. With this instrument a survey was carried out among professionals who work with Cleaner Production; a total of 185 responses were obtained. The results of the principal component analysis made evident that the most critical factors relate to the organization itself, being associated with vision, culture, strategic planning and support for Cleaner Production implementation, which made up the first and second components of the analysis. Regarding the barriers, the most prominent was an inadequate organizational vision and culture (first component), followed by a lack of external support (second component). Evidence was also found that the Cleaner Production concept may be misunderstood and that environmental education may be inadequate. 
Considering the measures that could make the application of Cleaner Production more effective, the main one is to reposition the external environment as a strong supporter of Cleaner Production, moving it away from its prominent position among the barriers.
84

Sistemáticas de agrupamento de países com base em indicadores de desempenho / Countries clustering systematics based on performance indexes

Mello, Paula Lunardi de January 2017 (has links)
A economia mundial passou por grandes transformações no último século, as quais incluiram períodos de crescimento sustentado seguidos por outros de estagnação, governos alternando estratégias de liberalização de mercado com políticas de protecionismo comercial e instabilidade nos mercados, dentre outros. Figurando como auxiliar na compreensão de problemas econômicos e sociais de forma sistêmica, a análise de indicadores de desempenho é capaz de gerar informações relevantes a respeito de padrões de comportamento e tendências, além de orientar políticas e estratégias para incremento de resultados econômicos e sociais. Indicadores que descrevem as principais dimensões econômicas de um país podem ser utilizados como norteadores na elaboração e monitoramento de políticas de desenvolvimento e crescimento desses países. Neste sentido, esta dissertação utiliza dados do Banco Mundial para aplicar e avaliar sistemáticas de agrupamento de países com características similares em termos dos indicadores que os descrevem. Para tanto, integra técnicas de clusterização (hierárquicas e não-hierárquicas), seleção de variáveis (por meio da técnica “leave one variable out at a time”) e redução dimensional (através da Análise de Componentes Principais) com vistas à formação de agrupamentos consistentes de países. A qualidade dos clusters gerados é avaliada pelos índices Silhouette, Calinski-Harabasz e Davies-Bouldin. Os resultados se mostraram satisfatórios quanto à representatividade dos indicadores destacados e qualidade da clusterização gerada. / The world economy underwent major transformations in the last century: periods of sustained growth followed by others of stagnation, governments alternating market liberalization strategies with policies of commercial protectionism, and instability in the markets, among others. 
As an aid to understanding economic and social problems in a systemic way, the analysis of performance indicators can generate relevant information about behavior patterns and trends, and can guide policies and strategies to improve economic and social outcomes. Indicators describing the main economic dimensions of a country can be used as guiding principles in the elaboration and monitoring of development and growth policies for these countries. In this sense, this dissertation uses data from the World Bank to apply and evaluate systematics for grouping countries with similar characteristics in terms of the indicators that describe them. To do so, it integrates clustering techniques (hierarchical and non-hierarchical), variable selection (through the "leave one variable out at a time" technique) and dimensionality reduction (applying Principal Component Analysis) with a view to forming consistent groupings of countries. The quality of the generated clusters is evaluated with the Silhouette, Calinski-Harabasz and Davies-Bouldin indexes. The results were satisfactory regarding the representativeness of the highlighted indicators and the quality of the resulting clustering.
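A minimal sketch of the clustering-plus-quality-index pipeline the abstract describes, using a hand-rolled k-means (Lloyd's algorithm) and the silhouette coefficient computed directly from its definition. The two-dimensional "country indicator" data are simulated; in practice library implementations (e.g. scikit-learn) would be used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative country-like data: two indicator dimensions, three synthetic groups.
groups = [rng.normal(loc=c, scale=0.3, size=(30, 2)) for c in ([0, 0], [3, 0], [0, 3])]
X = np.vstack(groups)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm (a sketch of non-hierarchical clustering)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers, keeping the old one if a cluster went empty.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient, computed from its textbook definition."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.empty(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean() if same.any() else 0.0       # mean intra-cluster distance
        b = min(D[i, labels == c].mean()                   # nearest other cluster
                for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

labels, centers = kmeans(X, k=3)
sil = silhouette(X, labels)
print("mean silhouette:", round(sil, 3))
```

A silhouette near 1 indicates compact, well-separated clusters; the Calinski-Harabasz and Davies-Bouldin indexes mentioned in the abstract are computed in the same spirit from within- and between-cluster dispersion.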
85

Razão 6/3 como indicador de qualidade da dieta brasileira e a relação com doenças crônicas não transmissíveis / The 6/3 ratio as a quality indicator of the Brazilian diet and its relation with chronic diseases

Mainardi, Giulia Marcelino 23 October 2014 (has links)
Introdução: A carga de doenças crônicas está aumentando rapidamente em todo o mundo. A proporção de ácido graxo 6/3 é um indicador qualitativo da dieta e sua elevação tem se mostrado associada a doenças crônicas na idade adulta. Em diversos países os padrões alimentares modernos apresentam proporção elevada de ácido graxo 6/3, no Brasil esse dado é desconhecido. Objetivo: Identificar os padrões de consumo alimentar da população brasileira na faixa etária de 15 a 35 anos e investigar a associação desses padrões com fatores de risco biológicos para doenças crônicas. Métodos: Foram utilizados dados do inquérito de consumo alimentar individual (POF 7) da Pesquisa Orçamento Familiares (POF) 2008 a 2009. Para estimar os padrões alimentares utilizou-se a análise de componentes principais (ACP), com rotação varimax. Para determinar o número de componentes a serem retidos na análise, consideramos aqueles com eingenvalues 1 e, para caracterizá-los, as variáveis com loadings |0,20|. Realizou-se o teste de Kaiser-Meyer-Olkin (KMO) para indicar a adequação dos dados à ACP. As associações entre os padrões alimentares (escores fatoriais) e fatores de risco para doenças crônicas, sintetizados na razão 6/3 do consumo alimentar acima de 10:1, foram estimadas através de regressões linear e logística. Foram considerados estatisticamente significantes os valores com p<0,05. As análises foram realizadas no software STATA 12. Resultados: Na amostra de 12527 indivíduos foram identificamos 3 padrões alimentares (P). O P3 caracterizado pelo consumo de preparações mistas, pizza/sanduíches, vitaminas/iogurtes, doces, sucos diversos e refrigerantes apresentou efeito de redução na 6/3 da dieta; o P1 pró inflamatório caracterizado por carnes processadas, panificados, laticínios, óleos e gorduras apresentou efeito de aumento na razão 6/3, este padrão é mais praticado pela população de menor renda em ambos os sexos. Observou-se baixo consumo de frutas e hortaliças em todos os padrões alimentares. 
Supondo-se um aumento na prática dos padrões 2 e 3, haveria a diminuição da probabilidade da pratica do P1 em 5 por cento em ambos os sexos. O índice de confiança da ACP, estimado pelo coeficiente KMO foi 0,57. Conclusão: Os padrões alimentares caracterizados pelo consumo de óleos e gorduras, carnes processadas, laticínios e panificados contribuíram para o aumento da 6/3 da dieta brasileira e, por extensão, para o risco de desenvolvimento de doenças crônicas não transmissíveis. Padrões alimentares complexos e com ampla gama de alimentos consumidos se mostram mais efetivos na redução da razão 6/3 da dieta de brasileiros adultos, lembrando que esse efeito é devido ao papel de sinergia durante a digestão e absorção que os alimentos exercem no organismo, já que o consumo se dá por uma variedade de alimentos e não por alimentos isolados. As políticas públicas na área da alimentação devem levar em conta a razão 6/3 como um dos marcadores da qualidade da dieta no País. / Introduction: The burden of chronic diseases is rapidly increasing worldwide. The 6/3 (omega-6/omega-3) fatty acid ratio is a qualitative indicator of diet, and its elevation has been shown to be associated with chronic diseases in adulthood. In many countries modern dietary patterns have a high 6/3 ratio; for Brazil this figure is unknown. Objective: To identify the dietary patterns of the Brazilian population aged 15 to 35 years and to investigate the association between these patterns and biological risk factors for chronic diseases. Methods: We used data from the individual food consumption survey (POF 7) of the 2008-2009 Pesquisa de Orçamentos Familiares (POF). To estimate the dietary patterns we used principal component analysis (PCA) with varimax rotation. To determine the number of components to be retained in the analysis we considered those with eigenvalues ≥ 1 and, to characterize them, the variables with loadings ≥ |0.20|. 
We used the Kaiser-Meyer-Olkin (KMO) test to check the adequacy of the data for PCA. Associations between dietary patterns (factor scores) and risk factors for chronic diseases, characterized by a 6/3 ratio of food consumption above 10:1, were estimated by linear and logistic regressions. Values with p < 0.05 were considered statistically significant. Analyses were performed in the STATA 12 software. Results: In the sample of 12,527 individuals we identified 3 dietary patterns (P). P3, characterized by the consumption of mixed preparations, pizza/sandwiches, smoothies/yogurts, pastries, juices and soft drinks, was effective in reducing the 6/3 ratio; the "pro-inflammatory" P1, characterized by processed meats, bakery products, dairy, oils and fats, had the effect of increasing the 6/3 ratio, and this pattern is more common among the lower-income population of both sexes. We found a low consumption of fruits and vegetables in all dietary patterns. An assumed increase in adherence to patterns 2 and 3 would decrease the probability of following P1 by 5 per cent in both sexes. The adequacy of the data for PCA, estimated by the KMO coefficient, was 0.57. Conclusion: Dietary patterns characterized by the consumption of oils and fats, processed meats, dairy and bakery products contributed to the increase of the 6/3 ratio in the Brazilian diet and, by extension, to the risk of developing chronic non-communicable diseases. Complex dietary patterns with a wide range of consumed foods prove more effective in reducing the 6/3 ratio of the diet of Brazilian adults; this effect is due to the synergy during digestion and absorption that foods exert on the body, since consumption involves a variety of foods rather than isolated foods. Public nutrition policies should take the 6/3 ratio into account as one of the markers of diet quality in the country.
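The varimax rotation used above can be sketched in a few lines; this is the standard SVD-based iteration, shown on a made-up loading matrix rather than the survey data. The rotation is orthogonal, so it redistributes loadings toward a "simple structure" (each variable loading strongly on one component) without changing the communalities.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a p x k loading matrix (standard SVD-based iteration)."""
    p, k = loadings.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion, expressed in the original frame.
        G = loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt                       # nearest orthogonal matrix to the gradient
        new_total = s.sum()
        if new_total < total * (1 + tol):
            break
        total = new_total
    return loadings @ R, R

# Toy loading matrix: two mixed components over six variables (illustrative only).
A = np.array([[0.7, 0.5], [0.8, 0.4], [0.6, 0.5],
              [0.5, -0.7], [0.4, -0.8], [0.5, -0.6]])
rotated, R = varimax(A)
print(np.round(rotated, 2))
```

After rotation, the first three variables load mainly on one component and the last three on the other, which is what makes rotated components interpretable as dietary "patterns".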
86

Adaptive supervisory control scheme for voltage controlled demand response in power systems

Abraham, Etimbuk January 2018 (has links)
Radical changes to present-day power systems will lead to power systems with a significant penetration of renewable energy sources and smartness, expressed in an extensive utilization of novel sensors and cyber-secure Information and Communication Technology. Although these renewable energy sources contribute to the reduction of CO2 emissions, their high penetration affects power system dynamic performance as a result of reduced power system inertia, as well as less flexibility with regard to dispatching generation to balance future demand. These pose a threat to both the security and the stability of future power systems. It is therefore very important to develop new methods through which power system security and stability can be maintained. This research investigated methods through which the contributions of on-load tap changing (OLTC) transformers and transformer clusters can be assessed, with the intent of developing real-time adaptive voltage-controlled demand response schemes for power systems. Such a scheme enables more active system components to be involved in the provision of frequency control as an ancillary service, and deploys a new frequency control service with low infrastructural investment, bearing in mind that OLTC transformers are already very prevalent in power systems. In this thesis, a novel online adaptive supervisory controller for ensuring optimal dispatch of voltage-controlled demand response resources is developed. This controller is designed using the assessment results of OLTC transformer impacts on steady-state frequency and was tested for a variety of scenarios. To achieve effective performance of the adaptive supervisory controller, the extensive use of statistical techniques for assessing OLTC transformer contributions to voltage-controlled demand response is presented. 
This thesis also includes the use of unsupervised machine learning techniques for power system partitioning, and the further use of statistical methods for assessing the contributions of OLTC transformer aggregates.
87

A Comparison of Data Transformations in Image Denoising

Michael, Simon January 2018 (has links)
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise.  In this thesis we compare two methods for image denoising. Each method is based on a data transformation: the Fourier transform and the singular value decomposition, respectively, applied to grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable by each method, and their computational time. We find that the methods are roughly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
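A hedged sketch of the SVD-based method: truncate the singular value decomposition of a noisy image matrix to a small rank and compare peak signal-to-noise ratios before and after. The synthetic low-rank "image" below stands in for real grayscale data, and the fixed rank is an assumption (in practice the truncation rank or a singular-value threshold must be chosen from the data).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic low-rank "image": a smooth rank-2 pattern plus additive Gaussian noise.
n = 64
t = np.linspace(0, 1, n)
clean = np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)) + np.outer(t, t)
noisy = clean + rng.normal(scale=0.2, size=(n, n))

def svd_denoise(img, rank):
    """Keep only the top-`rank` singular components of the image matrix."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, relative to the reference image's range."""
    mse = np.mean((ref - img) ** 2)
    peak = ref.max() - ref.min()
    return 10 * np.log10(peak**2 / mse)

denoised = svd_denoise(noisy, rank=2)
print(f"PSNR noisy:    {psnr(clean, noisy):.1f} dB")
print(f"PSNR denoised: {psnr(clean, denoised):.1f} dB")
```

The Fourier-based counterpart follows the same pattern with a different transform: zero out small (presumably noise-dominated) frequency coefficients and invert the transform.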
88

A Vision-Based Approach For Unsupervised Modeling Of Signs Embedded In Continuous Sentences

Nayak, Sunita 07 July 2005 (has links)
The common practice in sign language recognition is to first construct individual sign models, in terms of discrete state transitions, mostly represented using Hidden Markov Models, from manually isolated sign samples, and then to use them to recognize signs in continuous sentences. In this thesis we use a continuous state space model, where the states are based on purely image-based features, without the use of special gloves. We also present an unsupervised approach to both extract and learn models for continuous basic units of signs, which we term signemes, from continuous sentences. Given a set of sentences with a common sign, we can automatically learn the model for the part of the sign, or signeme, that is least affected by coarticulation effects. We tested our idea using the publicly available Boston SignStream dataset by building signeme models of 18 signs. We test the quality of the models by considering how well we can localize the sign in a new sentence. We also present the concept of smooth continuous-curve-based models formed using functional splines and curve registration. We illustrate this idea using 16 signs.
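The localization test can be caricatured as template matching over a feature sequence: slide a learned signeme template along a sentence of image-based feature vectors and pick the start frame with the lowest distance. This sliding-window sketch is far simpler than the thesis's continuous state space models, and all features below are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative setup: a "sentence" is a sequence of 2-D image-based feature
# vectors; a signeme template is embedded somewhere inside it.
template = np.cumsum(rng.normal(size=(12, 2)), axis=0)     # learned signeme (12 frames)
sentence = rng.normal(scale=2.0, size=(80, 2))             # background frames
start_true = 37
sentence[start_true:start_true + 12] = template + rng.normal(scale=0.1, size=template.shape)

def localize(sentence, template):
    """Slide the template over the sentence; return the start frame minimizing
    mean squared feature distance (a crude stand-in for model-based matching)."""
    L = len(template)
    costs = [np.mean((sentence[s:s + L] - template) ** 2)
             for s in range(len(sentence) - L + 1)]
    return int(np.argmin(costs))

print("localized start frame:", localize(sentence, template))
```

Real signs vary in speed and appearance across sentences, which is why the thesis uses curve registration and probabilistic models rather than a raw squared distance; the windowed-minimum structure of the search is the shared idea.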
89

Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in the fields of computer vision, image processing, and cognitive psychology. This has spawned various algorithms of different complexities. Principal component analysis (PCA) is a popular approach to face recognition and has often been used to benchmark other face recognition algorithms in identification and verification scenarios. In this thesis, however, we try to analyze different face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigenspace from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template matching algorithms like the Linear Discriminant Analysis (LDA) algorithm and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as local feature based algorithms like Elastic Bunch Graph Matching (EBGM) and a commercial face recognition algorithm, are selected for our experiments. We experiment on two different data sets, the FERET data set and the Notre Dame data set. The FERET data set consists of images of subjects with variation in both time and expression. The Notre Dame data set consists of images of subjects with variation in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze the effect of the distance metric used by the face recognition algorithm on the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score curves (CMC) and receiver operating characteristic (ROC) curves, respectively. 
Our studies indicate that both the holistic template matching algorithms and the feature based algorithms can be well approximated. We also find that the affine approximation training can be generalized across covariates. For the data with time variation, the rank order of approximation performance is BIC, LDA, EBGM, then commercial. For the data with expression variation, the rank order is LDA, BIC, commercial, then EBGM. Experiments approximating PCA with distance measures other than Euclidean also performed very well: PCA+Euclidean distance is best approximated, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
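A much-simplified sketch of the idea: fit an affine map (here just a scalar scale and offset) from PCA-space distances to a target algorithm's distances by least squares over all image pairs. The thesis fits a full affine transform of the eigenspace rather than a scalar map, and all distances below are simulated, so this only illustrates the fitting step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for PCA-space pairwise distances between face images...
n = 40
feats = rng.normal(size=(n, 5))                       # illustrative PCA coefficients
pca_d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)

# ...and for the distances output by some other recognition algorithm,
# simulated here as an affine function of the PCA distances plus noise.
target_d = 1.7 * pca_d + 0.3 + rng.normal(scale=0.05, size=pca_d.shape)

# Least-squares fit of target distances as an affine function of PCA distances,
# using each unordered image pair exactly once (upper triangle of the matrix).
iu = np.triu_indices(n, k=1)
X = np.column_stack([pca_d[iu], np.ones(len(iu[0]))])
coef, *_ = np.linalg.lstsq(X, target_d[iu], rcond=None)
a, b = coef
print(f"fitted scale a = {a:.2f}, offset b = {b:.2f}")
```

The residual of such a fit is one way to quantify how far a given algorithm's induced distance function is from an affine deformation of the PCA distances.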
90

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Components Analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigenspace of the PCA face recognition algorithm, approximates the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The strategy thus helps in comparing how close a face recognition algorithm is to the PCA-based one. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian interpersonal/intrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: one in which both the face recognition and the affine approximation algorithm were trained on the same data set, and another in which different data sets were used to train the two algorithms. Gross error measures like the average RMS error and the Stress-1 error were used to directly compare the raw similarity scores. 
The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and by the affine approximation strategy is not statistically significant. The results were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second scenario.
