131

Using Ears for Human Identification

Saleh, Mohamed Ibrahim 18 July 2007 (has links)
Biometrics is the study of automatic methods for distinguishing human beings based on physical or behavioral traits. The problem of finding good biometric features and recognition methods has been researched extensively in recent years. Our research considers the use of ears as a biometric for human recognition, a trait that has received far less attention than fingerprints, irises, and faces. This thesis presents a novel approach to recognizing individuals from their outer ear images through spatial segmentation, an approach that also handles occlusions well. The study presents several feature extraction techniques based on spatial segmentation of the ear image, as well as a method for classifier fusion. Principal component analysis (PCA) is used for feature extraction and dimensionality reduction, and nearest neighbor classifiers are used for classification. The research also investigates the use of ear images as a supplement to face images in a multimodal biometric system. Our base eigen-ear experiment achieves an 84% rank-one recognition rate, and the segmentation method improves this to 94%. Face recognition by itself, using the same approach, gives a 63% rank-one recognition rate; when complemented with ear images in a multimodal system, the rate improves to 94%. / Master of Science
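The eigen-ear pipeline described above can be approximated by PCA followed by a nearest-neighbor match. The sketch below is a minimal illustration of that idea in Python with scikit-learn; the array shapes, image size, and component count are assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: each row is a flattened grayscale ear image (e.g. 64x96 pixels).
rng = np.random.default_rng(0)
train_images = rng.random((200, 64 * 96))   # 200 enrolled ear images
train_labels = rng.integers(0, 50, 200)     # 50 enrolled subjects
probe_images = rng.random((10, 64 * 96))    # 10 probe images to identify

# "Eigen-ear" step: project images onto the principal components of the gallery.
pca = PCA(n_components=40)                  # component count is an assumption
train_features = pca.fit_transform(train_images)
probe_features = pca.transform(probe_images)

# Rank-one identification with a 1-nearest-neighbor classifier.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(train_features, train_labels)
print(knn.predict(probe_features))
```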
132

The implementation of noise addition partial least squares

Moller, Jurgen Johann 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2009. / When determining the chemical composition of a specimen, traditional laboratory techniques are often both expensive and time consuming. It is therefore preferable to employ more cost-effective spectroscopic techniques such as near infrared (NIR). Traditionally, the calibration problem has been solved by means of multiple linear regression to specify the model between X and Y. Traditional regression techniques, however, quickly fail when applied to spectroscopic data, as the number of wavelengths can easily be several hundred, often exceeding the number of chemical samples. This scenario, together with the high level of collinearity between wavelengths, necessarily leads to singularity problems when calculating the regression coefficients. Ways of dealing with the collinearity problem include principal component regression (PCR), ridge regression (RR) and PLS regression. Both PCR and RR require a significant amount of computation when the number of variables is large. PLS overcomes the collinearity problem in a similar way to PCR, by modelling both the chemical and spectral data as functions of common latent variables. The quality of the employed reference method greatly impacts the coefficients of the regression model and therefore the quality of its predictions. With both X and Y subject to random error, the quality of the predictions of Y is reduced as the level of noise increases. Previously conducted research focussed mainly on the effects of noise in X. This paper focuses on a method proposed by Dardenne and Fernández Pierna, called Noise Addition Partial Least Squares (NAPLS), that attempts to deal with the problem of poor reference values. Some aspects of the theory behind PCR, PLS and model selection are discussed, followed by a discussion of the NAPLS algorithm. Both PLS and NAPLS are implemented on various datasets that arise in practice, in order to determine cases where NAPLS will be beneficial over conventional PLS. For each dataset, specific attention is given to the analysis of outliers, influential values and the linearity between X and Y, using graphical techniques. Lastly, the performance of the NAPLS algorithm is evaluated for various
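As a rough illustration of why noisy reference values matter for a PLS calibration, the sketch below fits an ordinary PLS model on simulated spectra whose Y values have been perturbed with Gaussian noise and compares the prediction error at several noise levels. This is not the NAPLS algorithm of Dardenne and Fernández Pierna; it only sets up the calibration problem the thesis studies, and the simulated data and noise levels are arbitrary assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Simulated "spectra": 80 samples, 400 collinear wavelengths (p >> n, as is typical for NIR).
rng = np.random.default_rng(1)
latent = rng.normal(size=(80, 3))
X = latent @ rng.normal(size=(3, 400)) + 0.01 * rng.normal(size=(80, 400))
y_true = latent @ np.array([2.0, -1.0, 0.5])

X_train, X_test, y_train, y_test = train_test_split(X, y_true, random_state=0)

for noise_sd in [0.0, 0.1, 0.5]:            # noise added to the reference values (Y)
    y_noisy = y_train + rng.normal(scale=noise_sd, size=y_train.shape)
    pls = PLSRegression(n_components=3).fit(X_train, y_noisy)
    rmse = mean_squared_error(y_test, pls.predict(X_test)) ** 0.5
    print(f"reference noise sd={noise_sd}: test RMSE={rmse:.3f}")
```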
133

Apport de la chimiométrie et des plans d’expériences pour l’évaluation de la qualité de l’huile d’olive au cours de différents processus de vieillissement / Contribution of chemometrics and experimental designs for evaluating the quality of olive oil during different aging processes

Plard, Jérôme 17 January 2014 (has links)
Olive oil is an important component of the Mediterranean diet. As an oil ages it deteriorates and loses its properties, so it is important to know how its composition evolves according to the storage and manufacturing conditions. This monitoring was carried out on two oils of different manufacture: a green fruity oil obtained from olives harvested before maturity, and a black fruity oil obtained from olives harvested at maturity and fermented for a few days under controlled conditions. To obtain advanced aging quickly, both oils were artificially aged by a thermal process (heating to 180 °C under a supply of O2) and by a photochemical process (under a UV lamp with a supply of O2). These aging treatments were performed on different volumes in order to determine the impact of the surface/mass ratio. In parallel, samples of both oils were stored for 24 months under different storage conditions determined using an experimental design. The parameters affecting the conservation of olive oil the most are oxygen supply, light and temperature; their influence was determined by monitoring the main quality criteria, and the responses of the experimental designs highlighted interactions between these parameters. Analysing the oil composition and all the quality criteria requires a large amount of solvent and is very time consuming. To overcome this, the results were also used to build chemometric models that predict these criteria from the near- and mid-infrared spectra of the samples. Natural aging progressed very little in comparison with accelerated aging, so predictive models were established separately from the results of natural and accelerated aging.
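The experimental-design idea mentioned above, with oxygen, light and temperature as storage factors, can be sketched as a small two-level full factorial design fitted with main effects and two-factor interactions. The factor coding, response values and model below are illustrative assumptions, not the design actually used in the thesis.

```python
import itertools
import numpy as np

# Two-level full factorial design for three storage factors (coded -1 / +1).
factors = ["oxygen", "light", "temperature"]
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical quality response (e.g. a peroxide value) measured for each run.
response = np.array([2.1, 3.0, 2.4, 3.6, 2.8, 4.1, 3.1, 5.0])

# Model matrix with intercept, main effects and two-factor interactions.
cols = [np.ones(len(design))] + [design[:, i] for i in range(3)]
cols += [design[:, i] * design[:, j] for i, j in itertools.combinations(range(3), 2)]
X = np.column_stack(cols)
names = ["intercept"] + factors + [f"{a}x{b}" for a, b in itertools.combinations(factors, 2)]

# Least-squares estimates of the effects; large interaction terms indicate
# that the factors do not act independently on oil quality.
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
for name, c in zip(names, coef):
    print(f"{name:>22s}: {c:+.3f}")
```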
134

Processamento digital de imagens para identificação da sigatoka negra em bananais utilizando análise de componentes principais e redes neurais artificiais / Digital image processing for identifying black Sigatoka in banana plantations using principal component analysis and artificial neural networks

Silva, Silvia Helena Modenese Gorla da, 1974- January 2008 (has links)
Orientador: Carlos Roberto Padovani / Banca: Wilson da Silva Moraes / Banca: José Carlos Martinez / Banca: Marie Oshiwa / Banca: Sandra Fiorelli de Almeida Penteado / This study investigated digital image processing combined with principal component analysis and artificial neural networks as support tools for better identification of the early development stages of Black Sigatoka at field level, so that control measures can be adopted more quickly and the damage and losses caused by the disease in banana crops reduced. Digital images were collected of banana leaves infected with Black Sigatoka at stages 1, 2 and 3, of healthy leaves, and of leaves with oil phytotoxicity. Histograms of the image components in the RGB (Red, Green and Blue) system were then extracted for 256 gray-level intensities, giving 768 variables per sample. Applying an attribute selection technique, principal component analysis, reduced the 768 input variables to 11 canonical variables, a reduction of 98.6%. The classification phase was then carried out on the canonical variables using artificial neural networks. In general, the highest hit rates of the model were obtained for the classes of most interest for monitoring the disease, showing the robustness of the generated classifier, as evidenced by the low probability of misclassification (19%). / Doutor
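A rough Python sketch of the pipeline described above (RGB histograms, PCA reduction, then a neural network classifier) is given below. The image source, class labels, number of retained components and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def rgb_histograms(image):
    """Concatenate 256-bin histograms of the R, G and B channels (768 features)."""
    return np.concatenate([np.bincount(image[..., c].ravel(), minlength=256)
                           for c in range(3)])

# Hypothetical dataset: random 8-bit RGB leaf images and 5 classes
# (stages 1-3, healthy, oil phytotoxicity).
rng = np.random.default_rng(2)
images = rng.integers(0, 256, size=(150, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 5, size=150)

features = np.array([rgb_histograms(img) for img in images], dtype=float)
X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

# Reduce the 768 histogram variables to a handful of components, then classify.
pca = PCA(n_components=11).fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(pca.transform(X_train), y_train)
print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```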
135

Contribution à la reconnaissance non-intrusive d'activités humaines / Contribution to the non-intrusive recognition of human activities

Trabelsi, Dorra 25 June 2013 (has links)
Human activity recognition is currently an active research topic, as witnessed by the extensive research that has recently been conducted on the subject. In this context, the recognition of physical human activities is an emerging domain with expected impact on the monitoring of people's health status and of certain pathologies, on rehabilitation systems, and so on. This thesis proposes an approach for the automatic, non-intrusive recognition of daily physical activities from raw acceleration data measured by inertial wearable sensors placed at key points of the human body. The approaches studied are grouped into two parts: the first deals with supervised approaches, while the second treats unsupervised ones, with particular emphasis on unsupervised approaches that require no labelling of the data. We propose a probabilistic approach for modelling the time series associated with the accelerometer data, based on a dynamic regression model governed by a hidden Markov chain. By viewing the acceleration sequences from several sensors as multidimensional time series, human activity recognition reduces to a joint segmentation problem in which each segment is associated with an activity. The model is learned in an unsupervised framework in which no activity labels are needed, and the proposed approach takes into account the sequential appearance and temporal evolution of the data. The results clearly show the superiority of the proposed approach over other approaches in terms of classification accuracy for static, dynamic and transitional activities.
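The unsupervised segmentation idea, treating multi-sensor accelerations as a multidimensional time series and letting a hidden Markov model assign each sample to a segment/activity, can be sketched with a plain Gaussian HMM as below. This is a simplification: the thesis uses a hidden Markov regression model, whereas this sketch uses the off-the-shelf GaussianHMM from the hmmlearn package, and the data, channel count and number of states are assumptions.

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical multidimensional time series: 3 accelerometers x 3 axes = 9 channels,
# built from three artificial "activities" concatenated in time.
rng = np.random.default_rng(3)
segments = [rng.normal(loc=m, scale=0.3, size=(300, 9)) for m in (0.0, 1.5, -1.0)]
X = np.vstack(segments)

# Unsupervised joint segmentation: one hidden state per candidate activity.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                        random_state=0)
model.fit(X)                      # no activity labels are used
states = model.predict(X)         # each time step is assigned to a segment/state

# Print where the segmentation places its change points.
change_points = np.flatnonzero(np.diff(states)) + 1
print("estimated change points:", change_points)
```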
136

Aplicações de técnicas de análise multivariada em experimentos agropecuários usando o software R / Applications of multivariate analysis techniques in agricultural experiments using the R software

Sartorio, Simone Daniela 08 July 2008 (has links)
The use of multivariate analysis techniques is largely restricted to major research centres, large companies and the academic environment. These techniques are attractive because they use all response variables simultaneously in the interpretation of the data set, taking the correlations between them into account. One of the main barriers to their use is that researchers interested in quantitative research are often unfamiliar with them. Another difficulty is that most of the software packages that support this type of analysis (SAS, MINITAB, BMDP, STATISTICA, S-PLUS, SYSTAT, etc.) are not in the public domain. Disseminating the use of multivariate techniques can improve the quality of research, save time and cost, and make the interpretation of data structures easier while reducing the loss of information. In this work, some advantages of multivariate techniques over univariate ones were confirmed in the analysis of data from agricultural experiments. The analyses were carried out with the R software, an open, user-friendly and free package with numerous statistical resources available.
137

Avaliação da qualidade de águas pluviais armazenadas e estudos de tratabilidade empregando filtro de pressão com diferentes meios filtrantes visando ao aproveitamento para fins não potáveis / Evaluation of the quality of stored rainwater and treatability studies using a pressure filter with different filter media for non-potable uses

Nakada, Liane Yuri Kondo. January 2012 (has links)
Orientador: Rodrigo Braga Moruzzi / Banca: Alexandre Silveira / Banca: Samuel Conceição de Oliveira / This work was developed in three chronological steps: i) a study of the quality of stored rainwater harvested from a ceramic-tile roof; ii) modifications to the full-scale experimental plant for rainwater harvesting and simplified treatment, to enable the study of three different filter media; and iii) rainwater collection, bench-scale treatability studies and evaluation of the full-scale treatment. The results indicate that: i) rainwater harvested from ceramic roofs requires treatment to safeguard its use, even for non-potable purposes, in accordance with the recommendations of NBR 15527 (ABNT, 2007); each rainfall event has its own qualitative characteristics, significantly dependent on the dry period preceding the rain, so that treatability tests are needed for each event; ii) implementing rainwater harvesting and treatment systems in new buildings is more convenient than adding such systems as adaptations of existing water systems; iii) for the rainwater studied, no treatment configuration fully met the quality standard recommended by NBR 15527 (ABNT, 2007); nevertheless, the simplified treatment strategy investigated can produce water of the recommended quality / Mestre
138

Padronização de óleos de Copaifera multijuga Hayne por meio de técnicas cromatográficas / Standardization of Copaifera multijuga Hayne oils by means of chromatographic techniques

Barbosa, Paula Cristina Souza 25 June 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Trees of the genus Copaifera (Leguminosae), popularly known as copaibeiras, exude an oleoresin widely used in folk medicine and by the pharmaceutical and cosmetics industries because of its wound-healing and anti-inflammatory activities. Chemically, these oils are characterized by the presence of sesquiterpene hydrocarbons, oxygenated sesquiterpenes and diterpene acids. However, the chemical composition of these oleoresins is variable and the factors that determine it are still unknown, although several biotic and abiotic factors are considered sources of this variation. This variation makes it difficult to standardize the chemical composition of the oils, compromising their quality control and consequently the quality of the products derived from them, which has been a major obstacle to their wider application and commercialization. This variability is well known and reported in the literature, but most studies have been restricted to characterizing the oleoresin chemically and few have investigated the causes of the variation. The aim of this work was to standardize the chemical composition of copaiba oils by gas chromatography coupled with flame ionization detection (GC-FID) and mass spectrometry (GC-MS) and by comprehensive two-dimensional gas chromatography (GCxGC), and to analyse statistically the influence of abiotic factors such as seasonality, soil type and diameter at breast height (DBH), as well as termite infestation, on the chemical composition of these oils. In addition, five acid-catalysed esterification methods for the diterpene acids present in copaiba oils, using BF3/MeOH, H2SO4/MeOH and HCl/MeOH, were compared with respect to their efficiency and analytical throughput, the consumption and toxicity of the reagents, the cost-benefit ratio and, above all, the possibility of alteration or degradation of the constituents' structures when applied to copaiba oils. Copaiba oils were obtained from three collections: in November 2004 and November 2005 (considered dry periods) and in May 2005 (considered a rainy period). In total, 43 oleoresin samples were collected in the Ducke Reserve (Manaus-AM) from 33 different specimens with different DBHs growing on different soil types. The GC-FID and GC-MS analyses allowed the identification of 35 constituents: 22 sesquiterpene hydrocarbons, 9 oxygenated sesquiterpenes and 4 diterpene acids, while the GCxGC analysis identified a further 13 sesquiterpenes and 7 monoterpenes not previously reported in copaiba oleoresins. β-Caryophyllene and its oxide were the major constituents in 29 and 11 samples, respectively. Hierarchical cluster analysis (HCA) and principal component analysis (PCA) revealed two distinct groups with different chromatographic profiles, for which only the influence of soil type on the chemical composition of the oils was confirmed. The other factors analysed (seasonality, DBH and termite infestation) had no influence on the chemical composition of the copaiba oleoresins.
Regarding the esterification methods, the five methodologies tested, although reproducible, did not prove efficient, since they did not allow the identification of the constituents formed and led to the formation of artefacts.
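A minimal sketch of the HCA/PCA grouping step described above, applied to a table of chromatographic peak areas, could look like the following. The data matrix, constituent count and cluster count are assumptions for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 43 oil samples x 35 constituent peak areas (relative %).
rng = np.random.default_rng(4)
peak_areas = np.vstack([rng.normal(5, 1, size=(25, 35)),     # pretend group 1
                        rng.normal(8, 1, size=(18, 35))])    # pretend group 2
X = StandardScaler().fit_transform(peak_areas)

# Hierarchical cluster analysis (HCA) with Ward linkage, cut into two groups.
Z = linkage(X, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")

# Principal component analysis (PCA) to inspect the same structure in 2D.
scores = PCA(n_components=2).fit_transform(X)
for g in np.unique(groups):
    print(f"group {g}: {np.sum(groups == g)} samples, "
          f"mean PC1 score {scores[groups == g, 0].mean():+.2f}")
```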
139

Automatic Person Verification Using Speech and Face Information

Sanderson, Conrad, conradsand@ieee.org January 2003 (has links)
Identity verification systems are an important part of our everyday life. A typical example is the Automatic Teller Machine (ATM) which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one assigned to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person’s speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech based systems this is usually in the form of channel distortion and/or ambient noise; for face based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it is often comprised of several modality experts and a decision stage. Since a multi-modal system uses complementary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods, for use in fusion of speech and face information under noisy conditions, are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage comprised of a structurally noise resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
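The block-based DCT idea behind the DCT-mod2 features can be sketched roughly as below: compute the 2D DCT of each image block and replace the lowest-order coefficients with deltas computed from neighbouring blocks, so that a uniform illumination change largely cancels out. This is a loose, assumption-laden illustration, not a faithful reimplementation of the DCT-mod2 features from the thesis; the block size, coefficient counts and delta scheme are guesses.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block=8, n_coeffs=15):
    """2D DCT coefficients (simple raster order) for each non-overlapping block."""
    h, w = image.shape
    feats = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dctn(image[r:r + block, c:c + block], norm="ortho")
            feats[(r // block, c // block)] = coeffs.ravel()[:n_coeffs]
    return feats

def delta_modified_features(feats, n_delta=3):
    """Replace the first few coefficients of each block with horizontal and
    vertical deltas taken from neighbouring blocks (illustrative only)."""
    out = {}
    for (i, j), f in feats.items():
        left = feats.get((i, j - 1), f)[:n_delta]
        right = feats.get((i, j + 1), f)[:n_delta]
        up = feats.get((i - 1, j), f)[:n_delta]
        down = feats.get((i + 1, j), f)[:n_delta]
        deltas = np.concatenate([right - left, down - up])
        out[(i, j)] = np.concatenate([deltas, f[n_delta:]])
    return out

# Hypothetical 64x64 face crop.
image = np.random.default_rng(5).random((64, 64))
features = delta_modified_features(block_dct_features(image))
print(len(features), "blocks,", len(next(iter(features.values()))), "features per block")
```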
140

A Systems Approach to Identify Indicators for Integrated Coastal Zone Management

Sanò, Marcello 09 June 2009 (has links)
The objective of this thesis is to establish a methodological framework for identifying problem- and issue-oriented ICZM indicators for specific geographic contexts. It starts from the idea that indicator systems used to measure the state of the coast and the implementation of Integrated Coastal Zone Management (ICZM) initiatives must be oriented to the concrete problems of the study area, and that their validity must be checked not only against expert opinion but also against user perception and quantitative statistical analysis. The problem addressed is therefore the identification of site-specific and problem-oriented sets of indicators, to be used to determine baseline conditions and to monitor the effect of ICZM initiatives. The approach followed integrates contributions from coastal experts and stakeholders, systems theory, and the use of multivariate analysis techniques, in order to provide a cost-effective set of indicators, oriented to site-specific problems, with a broad system perspective. A systems approach, based on systems-thinking theory and practice, is developed and tested in this thesis to design models of coastal systems, through the identification of the system's components and relations, using the contribution of experts and stakeholders. Quantitative analysis of the system is then carried out, assessing the contribution of stakeholders and using multivariate statistics (principal component analysis), in order to understand the structure of the system, including relationships between variables. The simplification of the system (reduction of the number of variables) is one of the main outcomes, both in the participatory system design and in the quantitative multivariate analysis, aiming at a cost-effective set of key variables to be used as indicators for coastal management.
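The variable-reduction step mentioned above, selecting a smaller set of key indicator variables from the principal components, could be sketched as follows. The indicator matrix and the selection rule (highest absolute loading per retained component) are assumptions used only to illustrate the idea.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical monitoring data: 60 coastal sites x 20 candidate indicators.
rng = np.random.default_rng(6)
indicator_names = [f"indicator_{i:02d}" for i in range(20)]
data = rng.normal(size=(60, 20))

# Standardize and keep enough components to explain ~80% of the variance.
X = StandardScaler().fit_transform(data)
pca = PCA(n_components=0.80).fit(X)

# For each retained component, pick the candidate indicator with the largest
# absolute loading as its representative "key variable".
key_variables = []
for component in pca.components_:
    key_variables.append(indicator_names[int(np.argmax(np.abs(component)))])

print(f"{pca.n_components_} components retained")
print("suggested key variables:", sorted(set(key_variables)))
```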
