651

[en] INTELLIGENT WELL TRANSIENT TEMPERATURE SIGNAL RECONSTRUCTION / [pt] RECONSTRUÇÃO DE SINAIS TRANSIENTES DE TEMPERATURA EM POÇOS INTELIGENTES

MANOEL FELICIANO DA SILVA JUNIOR 10 November 2021 (has links)
[pt] A tecnologia de poços inteligentes já possui muitos anos de experiência de campo. Inúmeras publicações têm descrito como o controle de fluxo remoto e os sistemas de monitoração podem diminuir o número de intervenções, o número de poços e aumentar a eficiência do gerenciamento de reservatórios. Apesar da maturidade dos equipamentos de completação, o conceito de poço inteligente integrado como um elemento chave do Digital Oil Field ainda não está completamente desenvolvido. Sistemas permanentes de monitoração nesse contexto têm um papel fundamental como fonte da informação a respeito do sistema de produção real, visando calibração de modelos e minimização de incerteza. Entretanto, cada sensor adicional representa aumento de complexidade e de risco operacional. Um entendimento fundamentado do que realmente é necessário, dos tipos de sensores aplicáveis e de quais técnicas de análise estão disponíveis para extrair as informações necessárias é ponto chave para o sucesso do projeto de um poço inteligente. Este trabalho propõe uma nova forma de tratar os dados em tempo real de poços inteligentes através da centralização do pré-processamento dos dados. Um modelo numérico de poço inteligente para temperatura em regime transiente foi desenvolvido, testado e validado com a intenção de gerar dados sintéticos. A aplicação foi escolhida, sem perda de generalidade, como um exemplo representativo para validação dos algoritmos de limpeza e extração de características desenvolvidos. Os resultados mostraram aumento da eficiência quando comparados com o estado da arte e um potencial para capturar a influência mútua entre os processos de produção. / [en] Intelligent Well (IW) technology has built up several years of production experience. Numerous publications have described how remote flow control and monitoring capabilities can lead to fewer interventions, a reduced well count and improved reservoir management.
Despite the maturity of IW equipment, the concept of the integrated IW as a key element in the Digital Oil Field is still not fully developed. Permanent monitoring systems in this framework play an important role as a source of the necessary information about the actual production system, aiming at model calibration and uncertainty minimization. However, each extra permanently installed sensor increases the well's installation complexity and operational risk. A well-founded understanding of what data is actually needed and what analysis techniques are available to extract the required information are key factors for the success of an IW project. This work proposes a new framework for real-time data analysis based on centralized pre-processing. A numeric IW transient temperature model is developed, tested and validated to generate synthetic data. It was chosen, without loss of generality, as a representative application to test and validate the cleansing and feature extraction algorithms developed. The results are compared with the state of the art, showing advantages in efficiency and a potential to capture mutual influence among production processes.
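The cleansing of transient temperature signals described above can be illustrated with a minimal sketch: a sliding median filter removing spike artefacts from a synthetic warm-up transient. The exponential model and all parameter values here are illustrative assumptions, not taken from the thesis.

```python
import math

def median_filter(signal, window=5):
    """Sliding-window median filter; edges use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        neighbourhood = sorted(signal[lo:hi])
        out.append(neighbourhood[len(neighbourhood) // 2])
    return out

# Synthetic warm-up transient: T(t) = T_res - (T_res - T0) * exp(-t/tau)
T0, T_res, tau = 60.0, 90.0, 50.0
clean = [T_res - (T_res - T0) * math.exp(-t / tau) for t in range(300)]

# Inject spike artefacts such as a faulty gauge might record
noisy = list(clean)
for idx in (40, 150, 260):
    noisy[idx] += 25.0

filtered = median_filter(noisy, window=5)
worst = max(abs(f - c) for f, c in zip(filtered, clean))
print(f"max deviation after cleansing: {worst:.3f} degC")
```

Because the underlying transient varies slowly relative to the window, the median passes the smooth signal through while rejecting isolated spikes.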
652

Applied Machine Learning Predicts the Postmortem Interval from the Metabolomic Fingerprint

Arpe, Jenny January 2024 (has links)
In forensic autopsies, accurately estimating the postmortem interval (PMI) is crucial. Traditional methods, relying on physical parameters and police data, often lack precision, particularly after approximately two days have passed since the person's death. New methods increasingly focus on analyzing postmortem metabolomics in biological systems, which act as a 'fingerprint' of ongoing processes influenced by internal and external molecules. By carefully analyzing these metabolomic profiles, which span a diverse range of information from events preceding death to postmortem changes, there is potential to provide more accurate estimates of the PMI. Until recently, the limited availability of real human data hindered comprehensive investigation. Large-scale metabolomic data collected by the National Board of Forensic Medicine (RMV, Rättsmedicinalverket) presents a unique opportunity for predictive analysis in forensic science, enabling innovative approaches for improving PMI estimation. However, the metabolomic data are large, complex, and potentially nonlinear, making them difficult to interpret. This underscores the importance of effectively employing machine learning algorithms to manage metabolomic data for PMI prediction, the primary focus of this project. In this study, a dataset consisting of 4,866 human samples and 2,304 metabolites from the RMV was utilized to train a model capable of predicting the PMI. Random Forest (RF) and Artificial Neural Network (ANN) models were then employed for PMI prediction. Furthermore, feature selection and incorporating sex and age into the model were explored to improve the neural network's performance. This master's thesis shows that ANN consistently outperforms RF in PMI estimation, achieving an R2 of 0.68 and an MAE of 1.51 days, compared to RF's R2 of 0.43 and MAE of 2.0 days, across the entire PMI interval.
Additionally, feature selection indicates that only 35% of the metabolites are needed to maintain comparable predictive accuracy. Furthermore, Principal Component Analysis (PCA) reveals that these informative metabolites are primarily located within a specific cluster on the first and second principal components (PCs), suggesting a need for further research into the biological context of these metabolites. In conclusion, the dataset has proven valuable for predicting PMI. This indicates significant potential for employing machine learning models in PMI estimation, thereby assisting forensic pathologists in determining the time of death. Notably, the model shows promise in surpassing current methods and filling crucial gaps in the field, representing an important step towards accurate PMI estimation in forensic practice. This project suggests that machine learning will play a central role in assisting with determining time since death in the future.
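The R2 and MAE figures quoted above can be computed for any regressor with two short metric functions; the PMI values below are hypothetical, for illustration only.

```python
def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the target (here: days)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy PMI values in days (hypothetical, not from the RMV dataset)
true_pmi = [1.0, 2.5, 4.0, 6.0, 8.5]
pred_pmi = [1.4, 2.0, 4.5, 5.2, 9.0]
print(f"MAE = {mae(true_pmi, pred_pmi):.2f} days, R2 = {r2(true_pmi, pred_pmi):.2f}")
```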
653

A comparison of the performance of three multivariate methods in investigating the effects of province and power usage on the amount of five power modes in South Africa

Kanyama, Busanga Jerome 06 1900 (has links)
Researchers apply the multivariate techniques MANOVA, discriminant analysis and factor analysis, most commonly in social science, to identify and test effects. The use of these multivariate techniques is uncommon in investigating the effects of power usage and province in South Africa on the amounts of the five power modes. This dissertation discusses this issue, the methodology and the practical problems of the three multivariate techniques. The author examines the applications of each technique in social public research, and comparisons are made between the three multivariate techniques. The dissertation concludes with a discussion of both the concepts of the present multivariate techniques and the results found on the use of the three techniques in household energy consumption. The author recommends focusing on the hypotheses of the study, or the typical questions surrounding each technique, to guide the researcher in choosing the appropriate analysis in social research, as each technique has its own strengths and limitations. / Statistics / M. Sc. (Statistics)
654

Improving Knowledge of Truck Fuel Consumption Using Data Analysis

Johnsen, Sofia, Felldin, Sarah January 2016 (has links)
The large potential of big data and the value it has brought to various industries have been established in research. Because big data, if handled and analyzed in the right way, can reveal information to support decision making in an organization, this thesis is conducted as a case study at an automotive manufacturer with access to large amounts of customer usage data from its vehicles. The motivation for analyzing this kind of data is based on the cornerstones of Total Quality Management, with the end objective of increasing customer satisfaction with the concerned products or services. The case study includes a data analysis exploring whether and how patterns about what affects fuel consumption can be revealed from aggregated customer usage data of trucks linked to truck applications. Based on the case study, conclusions are drawn about how a company can use this type of analysis as well as how to handle the data in order to turn it into business value. The data analysis reveals properties describing truck usage using Factor Analysis and Principal Component Analysis. One property in particular is concluded to be important, as it appears in the results of both techniques. Based on these properties the trucks are clustered using k-means and Hierarchical Clustering, which reveals groups of trucks in which the importance of the properties varies. Due to the homogeneity and complexity of the chosen data, the clusters of trucks cannot be linked to truck applications; this would require data that is more easily interpretable. Finally, the importance of the properties for fuel consumption in the clusters is explored using model estimation. A comparison of Principal Component Regression (PCR) and the two regularization techniques Lasso and Elastic Net is made. PCR results in poor models that are difficult to evaluate. The two regularization techniques, however, outperform PCR, both giving a higher and very similar explained variance.
The three techniques do not show obvious similarities in the models, and no conclusions can therefore be drawn concerning what is important for fuel consumption. During the data analysis many problems with the data are discovered, which are linked to managerial and technical issues of big data. As a result, some of the parameters interesting for the analysis cannot be used, which likely contributes to the inability to get unanimous results from the model estimations. It is also concluded that the data was not originally intended for this type of analysis of large populations, but rather for testing and engineering purposes. Nevertheless, this type of data still contains valuable information and can be used if managed in the right way. From the case study it can be concluded that in order to use the data for more advanced analysis, a big-data plan is needed at a strategic level in the organization. The plan summarizes the suggested solution to the organization's managerial issues with big data: it describes how to handle the data, how the analytic models revealing the information should be designed, and the tools and organizational capabilities needed to support the people using the information.
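The k-means step described above can be sketched in plain Python on a single usage property; the truck scores below are invented for illustration and are not the thesis data.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 1-D data; returns centroids and assignments."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    assignments = [min(range(k), key=lambda c: (p - centroids[c]) ** 2)
                   for p in points]
    return centroids, assignments

# Hypothetical "usage property" scores for 8 trucks (e.g. share of highway driving)
scores = [0.10, 0.12, 0.15, 0.11, 0.80, 0.85, 0.78, 0.82]
centroids, labels = kmeans(scores, k=2)
print(sorted(round(c, 2) for c in centroids))
```

With two well-separated groups, the centroids converge to the group means regardless of the random initialization.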
655

Αναγνώριση βασικών κινήσεων του χεριού με χρήση ηλεκτρομυογραφήματος / Recognition of basic hand movements using electromyography

Σαψάνης, Χρήστος 13 October 2013 (has links)
Ο στόχος αυτής της εργασίας ήταν η αναγνώριση έξι βασικών κινήσεων του χεριού με χρήση δύο συστημάτων. Όντας θέμα διεπιστημονικού επιπέδου έγινε μελέτη της ανατομίας των μυών του πήχη, των βιοσημάτων, της μεθόδου της ηλεκτρομυογραφίας (ΗΜΓ) και μεθόδων αναγνώρισης προτύπων. Παράλληλα, το σήμα περιείχε αρκετό θόρυβο και έπρεπε να αναλυθεί, με χρήση του EMD, να εξαχθούν χαρακτηριστικά αλλά και να μειωθεί η διαστασιμότητά τους, με χρήση των RELIEF και PCA, για βελτίωση του ποσοστού επιτυχίας ταξινόμησης. Στο πρώτο μέρος γίνεται χρήση συστήματος ΗΜΓ της Delsys αρχικά σε ένα άτομο και στη συνέχεια σε έξι άτομα με το κατά μέσο όρο επιτυχημένης ταξινόμησης, για τις έξι αυτές κινήσεις, να αγγίζει ποσοστά άνω του 80%. Το δεύτερο μέρος περιλαμβάνει την κατασκευή αυτόνομου συστήματος ΗΜΓ με χρήση του Arduino μικροελεγκτή, αισθητήρων ΗΜΓ και ηλεκτροδίων, τα οποία είναι τοποθετημένα σε ένα ελαστικό γάντι. Τα αποτελέσματα ταξινόμησης σε αυτή την περίπτωση αγγίζουν το 75%. / The aim of this work was to recognize six basic hand movements using two systems. Being an interdisciplinary topic, it required study of the anatomy of the forearm muscles, biosignals, the method of electromyography (EMG) and pattern recognition methods. Moreover, the signal contained considerable noise and had to be analyzed using EMD, with features extracted and their dimensionality reduced using RELIEF and PCA, to improve the classification success rate. The first part uses a Delsys EMG system, initially on one individual and then on six people, with the average classification accuracy for these six movements exceeding 80%. The second part involves the construction of an autonomous EMG system using an Arduino microcontroller, EMG sensors and electrodes, which are arranged in an elastic glove. Classification accuracy in this case reached 75%.
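Time-domain features such as mean absolute value (MAV), root mean square (RMS) and zero-crossing count are standard in EMG pattern recognition; the sketch below computes them on a synthetic sine window as a stand-in for a real EMG recording (the thesis itself uses EMD-based features with RELIEF/PCA reduction, so this is illustrative only).

```python
import math

def emg_features(window):
    """Common time-domain EMG features: MAV, RMS and zero-crossing count."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return mav, rms, zc

# Synthetic burst: a 50 Hz sine sampled at 1 kHz, standing in for one EMG window
signal = [math.sin(2 * math.pi * 50 * t / 1000.0) for t in range(200)]
mav, rms, zc = emg_features(signal)
print(f"MAV={mav:.3f} RMS={rms:.3f} ZC={zc}")
```

For a pure sine the RMS approaches 1/√2 ≈ 0.707 and the MAV approaches 2/π ≈ 0.637, which makes the feature values easy to sanity-check.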
656

Modélisation et analyses cinématiques de l'épaule lors de levers de charges en hauteur

Desmoulins, Landry 10 1900 (has links)
Thèse de doctorat à mi-chemin entre la recherche fondamentale et appliquée. Les champs disciplinaires sont principalement la biomécanique, l'ergonomie physique ou encore l'anatomie. Réalisé en cotutelle avec le professeur Paul Allard et Mickael Begon. / An occupation that requires handling loads combined with large elevations of the arms is associated with the occurrence of shoulder musculoskeletal disorders. The analysis of these joint movements is essential because it helps to quantify the stress applied to the musculoskeletal structures. This thesis provides an innovative model that allows estimation of the shoulder complex kinematics and uses it to analyze joint kinematics during lifting tasks. It is organized into three sub-objectives. The first is the development and validation of a kinematic model as representative as possible of the shoulder complex anatomy, while correcting soft tissue artifacts through the use of global optimization. This model includes a scapulothoracic closed loop, which constrains a scapular contact point to remain coincident with a thoracic gliding plane modeled by a subject-specific ellipsoid. In the validation process, the reference model used the gold standard of direct measurements of bone movements. In dynamic movements, the closed-loop model developed generates barely more kinematic error than existing models obtain for standard movements. The second aim is to detect and quantify the shoulder joint movements influenced by the combined effects of two risk factors: task height and load weight. The results indicate that many joint angle peaks are influenced by the interaction of height and weight. Across the different initial and deposit heights, when the weight increases the kinematic changes are substantial, in both number and magnitude.
The kinematic strategies of participants are more consistent when the weight of the load increases for an initial lift at hip level compared to shoulder level, and for a deposit at eye level compared to shoulder level. The third aim is to investigate the magnitude and timing of the maximum peak vertical acceleration of the box. The significant joint movements are characterized with a principal component analysis of the joint angle values collected at that instant. In particular, this study highlights that elbow flexion and thoraco-humeral elevation are two correlated joint movements that remain invariant across all lifting tasks, whatever the initial height, deposit height and weight of the load. The realism of the developed shoulder model and the kinematic analyses open perspectives in occupational biomechanics and contribute to risk prevention efforts in health and safety. / Une activité professionnelle qui exige de manipuler des charges combinée à de grandes élévations des bras augmente les chances de développer un trouble musculo-squelettique aux épaules. L’analyse de ces mouvements articulaires est essentielle car elle contribue à quantifier les contraintes appliquées aux structures musculo-squelettiques. Cette thèse propose un modèle innovant qui permet l’estimation de la cinématique du complexe de l’épaule, et l’utilise ensuite afin d’analyser la cinématique de levers de charge. Elle s’organise en trois sous-objectifs. Le premier concerne le développement et la validation d’un modèle cinématique le plus représentatif possible de l’anatomie du complexe de l’épaule tout en corrigeant les artéfacts des tissus mous par une optimisation multi-segmentaire. Ce modèle, avec une fermeture de boucle scapulo-thoracique, impose à un point de contact scapulaire d’être coïncident au plan de glissement thoracique modélisé par un ellipsoïde mis à l’échelle pour chaque sujet.
Le modèle qui a été utilisé comme référence lors des comparaisons du processus de validation bénéficie du « gold standard » de mesures directes des mouvements osseux. Le modèle développé en boucle fermée génère à peine plus d’erreurs cinématiques lors de mouvements dynamiques que les erreurs obtenues par les modèles existants pour l’étude de mouvements standards. Le second identifie et quantifie les mouvements articulaires de l’épaule influencés par la combinaison des effets de deux facteurs de risques : les hauteurs importantes d’agencement de la tâche (hauteurs de saisie et de dépôt) et les masses de charges (6 kg, 12 kg et 18 kg). Les résultats indiquent qu’il existe de nombreux pics d’angles articulaires qui sont influencés par l’interaction des deux effets. Lorsque la masse augmente, les modifications cinématiques sont plus importantes, en nombre et en amplitude, selon les différentes hauteurs de saisies et de dépôts de la charge. Les participants varient peu leur mode opératoire pour une saisie à hauteur des hanches en comparaison des épaules, et pour un dépôt à hauteur des yeux en comparaison aux épaules avec une charge plus lourde. Un troisième s’intéresse au pic maximal d’accélération verticale de la charge dans son intensité et sa temporalité. Basée sur une analyse en composante principale des valeurs d’angles articulaires à cet instant, elle permet de caractériser les mouvements articulaires significatifs. Cette étude met notamment en évidence que la flexion du coude et l’élévation thoraco-humérale sont deux mouvements articulaires corrélés invariants à toutes les tâches de lever en hauteur quelles que soient la hauteur de dépôt et la masse de la charge. Le souci de réalisme du modèle développé et les analyses cinématiques menées ouvrent des perspectives en biomécanique occupationnelle et participent à l’effort de prévention des risques en santé et sécurité.
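The scapulothoracic closed-loop constraint described above, a scapular contact point kept on a subject-specific thoracic ellipsoid, reduces to an implicit surface equation whose residual is zero on the surface. The sketch below evaluates that residual; the ellipsoid semi-axes and points are invented for illustration.

```python
def ellipsoid_residual(point, center, radii):
    """Implicit ellipsoid equation; zero when the point lies on the surface."""
    x, y, z = (p - c for p, c in zip(point, center))
    a, b, c_ = radii
    return (x / a) ** 2 + (y / b) ** 2 + (z / c_) ** 2 - 1.0

# Hypothetical thorax ellipsoid (semi-axes in cm) and two candidate contact points
center, radii = (0.0, 0.0, 0.0), (10.0, 14.0, 20.0)
on_surface = (10.0, 0.0, 0.0)   # lies exactly at the end of the x semi-axis
off_surface = (10.0, 7.0, 0.0)  # violates the gliding-plane constraint
print(ellipsoid_residual(on_surface, center, radii))
print(ellipsoid_residual(off_surface, center, radii))
```

In a global-optimization setting, a residual of this form would enter the cost function as an equality constraint to be driven to zero at every frame.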
657

Oxidation of terpenes in indoor environments : A study of influencing factors

Pommer, Linda January 2003 (has links)
In this thesis the oxidation of monoterpenes by O3 and NO2, and the factors that influence the oxidation, were studied. In the environment both ozone (O3) and nitrogen dioxide (NO2) are present as oxidising gases, which cause sampling artefacts when using Tenax TA as an adsorbent to sample organic compounds in air. A scrubber was developed to remove O3 and NO2 prior to the sampling tube, and artefacts during sampling were minimised when using the scrubber. The main organic compounds sampled in this thesis were two monoterpenes, α-pinene and Δ3-carene, due to their presence in both indoor and outdoor air. The recovery of the monoterpenes through the scrubber varied between 75-97% at relative humidities of 15-75%. The reactions of α-pinene and Δ3-carene with O3, NO2 and nitric oxide (NO) at different relative humidities (RHs) and reaction times were studied in a dark reaction chamber. The experiments were planned and performed according to an experimental design where the factors influencing the reaction (O3, NO2, NO, RH and reaction time) were varied between high and low levels. In the experiments up to 13% of the monoterpenes reacted when O3, NO2 and reaction time were at high levels and NO and RH were at low levels. In the evaluation, eight and seven factors (including both single and interaction factors) were found to influence the amount of α-pinene and Δ3-carene reacted, respectively. The three most influential factors for both monoterpenes were the O3 level, the reaction time and the RH. Increased O3 level and reaction time increased the amount of monoterpene reacted, while increased RH decreased it. A theoretical model of the reactions occurring in the reaction chamber was created. The amounts of monoterpene reacted at different initial settings of O3, NO2 and NO were calculated, as well as the influence of different reaction pathways and the concentrations of O3, NO2 and NO at specific reaction times.
The theoretical model underestimated the reactivity of the gas mixture towards α-pinene and Δ3-carene, but the concentrations of O3, NO2 and NO it calculated corresponded closely with experimental results obtained under similar conditions. The possible associations between organic compounds in indoor air, building variables and the presence of sick building syndrome were studied using principal component analysis. The most complex model was able to separate 71% of the “sick” buildings from the “healthy” buildings. The most important variables separating the “sick” buildings from the “healthy” ones were a more frequent occurrence, or a higher concentration, of compounds with shorter retention times in the “sick” buildings.
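The two-level experimental design described above, with each factor varied between a high and a low level, can be enumerated directly as a full factorial. The level values below are assumptions for illustration, not those used in the thesis.

```python
from itertools import product

# Assumed low/high levels for the five factors (units and values are illustrative)
factors = {
    "O3":            (20.0, 100.0),   # ppb
    "NO2":           (5.0, 40.0),     # ppb
    "NO":            (1.0, 20.0),     # ppb
    "RH":            (15.0, 75.0),    # %, matching the 15-75% range quoted above
    "reaction_time": (2.0, 30.0),     # minutes
}

# Full two-level factorial design: every high/low combination of the 5 factors
names = list(factors)
runs = [dict(zip(names, levels))
        for levels in product(*(factors[n] for n in names))]
print(len(runs))  # 2**5 = 32 experimental runs
high_o3_runs = [r for r in runs if r["O3"] == 100.0]
print(len(high_o3_runs))
```

Each run dictionary gives one chamber experiment's settings; single and interaction effects are then estimated from contrasts over these 32 runs.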
658

Chimiométrie appliquée à la spectroscopie de plasma induit par laser (LIBS) et à la spectroscopie terahertz / Chemometric applied to laser-induced breakdown spectroscopy (LIBS) and terahertz spectroscopy

El Haddad, Josette 13 December 2013 (has links)
L’objectif de cette thèse était d’appliquer des méthodes d’analyse multivariées au traitement des données provenant de la spectroscopie de plasma induit par laser (LIBS) et de la spectroscopie térahertz (THz) dans le but d’accroître les performances analytiques de ces techniques. Les spectres LIBS provenaient de campagnes de mesures directes sur différents sites géologiques. Une approche univariée n’a pas été envisageable à cause d’importants effets de matrices et c’est pour cela qu’on a analysé les données provenant des spectres LIBS par réseaux de neurones artificiels (ANN). Cela a permis de quantifier plusieurs éléments mineurs et majeurs dans les échantillons de sol avec un écart relatif de prédiction inférieur à 20% par rapport aux valeurs de référence, jugé acceptable pour des analyses sur site. Dans certains cas, il a cependant été nécessaire de prendre en compte plusieurs modèles ANN, d’une part pour classer les échantillons de sol en fonction d’un seuil de concentration et de la nature de leur matrice, et d’autre part pour prédire la concentration d’un analyte. Cette approche globale a été démontrée avec succès dans le cas particulier de l’analyse du plomb pour un échantillon de sol inconnu. Enfin, le développement d’un outil de traitement par ANN a fait l’objet d’un transfert industriel. Dans un second temps, nous avons traité des spectres d’absorbance terahertz. Ces spectres provenaient de mesures d’absorbance sur des mélanges ternaires de Fructose-Lactose-acide citrique liés par du polyéthylène et préparés sous forme de pastilles. Une analyse semi-quantitative a été réalisée avec succès par analyse en composantes principales (ACP). Puis les méthodes quantitatives de régression par moindres carrés partiels (PLS) et de réseaux de neurones artificiels (ANN) ont permis de prédire les concentrations de chaque constituant de l’échantillon avec une valeur d’erreur quadratique moyenne inférieure à 0.95 %.
Pour chaque méthode de traitement, le choix des données d’entrée et la validation de la méthode ont été discutés en détail. / The aim of this work was to apply multivariate methods to spectral data from laser-induced breakdown spectroscopy (LIBS) and terahertz (THz) spectroscopy in order to improve the analytical ability of these techniques. In this work, the LIBS data were derived from on-site measurements of soil samples. The common univariate approach was not efficient enough for accurate quantitative analysis, and consequently artificial neural networks (ANN) were applied. This allowed several major and minor elements to be quantified in soil samples with a relative error of prediction lower than 20% compared to reference values. In specific cases, a single ANN model did not allow the quantitative analysis to be achieved successfully, and it was necessary to exploit a series of ANN models, either for classification against a concentration threshold or a matrix type, or for quantification. This complete approach based on a series of ANN models was efficiently applied to the quantitative analysis of unknown soil samples. Based on this work, a module of data treatment by ANN was included in the Analibs software of the IVEA company. The second part of this work focused on the treatment of absorbance spectra in the terahertz range. The samples were pressed pellets of mixtures of three products, namely fructose, lactose and citric acid, with polyethylene as a binder. A very efficient semi-quantitative analysis was conducted using principal component analysis (PCA). Then, quantitative analyses based on partial least squares regression (PLS) and ANN allowed the concentration of each product to be quantified with a root mean square error (RMSE) lower than 0.95%. Throughout this work on data processing, both the selection of input data and the evaluation of each model have been studied in detail.
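A minimal stand-in for the quantitative calibration step is a univariate least-squares fit with an RMSE check; the absorbance/concentration pairs below are hypothetical, and the actual work uses PLS and ANN on full spectra rather than a single peak.

```python
def fit_line(x, y):
    """Ordinary least-squares slope/intercept; a univariate stand-in for PLS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def rmse(y_true, y_pred):
    """Root mean square error of prediction."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

# Hypothetical calibration: peak absorbance vs fructose concentration (%)
absorbance    = [0.10, 0.21, 0.29, 0.41, 0.52]
concentration = [5.0, 10.0, 15.0, 20.0, 25.0]
slope, intercept = fit_line(absorbance, concentration)
predicted = [slope * a + intercept for a in absorbance]
print(f"RMSE = {rmse(concentration, predicted):.3f} %")
```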
659

[en] RISK ANALYSIS IN A PORTFOLIO OF COMMODITIES: A CASE STUDY / [pt] ANÁLISE DE RISCOS NUM PORTFÓLIO DE COMMODITIES: UM ESTUDO DE CASO

LUCIANA SCHMID BLATTER MOREIRA 23 March 2015 (has links)
[pt] Um dos principais desafios no mercado financeiro é simular preços mantendo a estrutura de correlação entre os inúmeros ativos de um portfólio. A Análise de Componentes Principais surge como uma solução para esse problema. Além disso, dada a incerteza presente nos mercados de commodities de derivados de petróleo, o investidor quer proteger seus ativos de perdas potenciais. Como alternativa a esse problema, a otimização de várias medidas de risco, como Value-at-Risk, Conditional Value-at-Risk e a medida Ômega, é uma ferramenta financeira importante. Além disso, o backtest é amplamente utilizado para validar e analisar o desempenho do método proposto. Nesta dissertação, trabalharemos com um portfólio de commodities de petróleo. Vamos unir diferentes técnicas e propor uma nova metodologia que consiste na diminuição da dimensão do portfólio proposto. O passo seguinte é simular os preços dos ativos na carteira e, em seguida, otimizar a alocação do portfólio de commodities de derivados do petróleo. Finalmente, vamos usar técnicas de backtest a fim de validar nosso método. / [en] One of the main challenges in the financial market is to simulate prices while keeping the correlation structure among numerous assets. Principal Component Analysis emerges as a solution to this problem. Also, given the uncertainty present in commodities markets, an investor wants to protect his or her assets from potential losses, so, as an alternative, the optimization of various risk measures, such as Value-at-Risk, Conditional Value-at-Risk and the Omega Ratio, is an important financial tool. Additionally, the backtest is widely used to validate and analyze the performance of the proposed methodology. In this dissertation, we work with a portfolio of oil commodities. We put together different techniques and propose a new methodology that consists in (potentially) decreasing the dimension of the proposed portfolio.
The following step is to simulate the prices of the assets in the portfolio and then optimize the allocation of the portfolio of oil commodities. Finally, we will use backtest techniques in order to validate our method.
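Simulating returns while preserving the correlation structure can be sketched with a Cholesky factor of the covariance matrix (a simpler stand-in for the PCA-based approach the dissertation uses); the variances and the 0.8 correlation below are assumptions for illustration.

```python
import math
import random

def cholesky_2x2(var1, var2, corr):
    """Lower-triangular Cholesky factor of a 2x2 covariance matrix."""
    s1, s2 = math.sqrt(var1), math.sqrt(var2)
    return [[s1, 0.0], [corr * s2, s2 * math.sqrt(1 - corr ** 2)]]

rng = random.Random(42)
L = cholesky_2x2(0.04, 0.09, corr=0.8)  # two oil-product return variances, assumed

# Correlated return pairs: r = L @ z with z ~ N(0, I)
pairs = []
for _ in range(20000):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    pairs.append((L[0][0] * z1, L[1][0] * z1 + L[1][1] * z2))

mean1 = sum(p[0] for p in pairs) / len(pairs)
mean2 = sum(p[1] for p in pairs) / len(pairs)
cov = sum((a - mean1) * (b - mean2) for a, b in pairs) / len(pairs)
corr_hat = cov / (0.2 * 0.3)  # divide by the two standard deviations
print(f"sample correlation: {corr_hat:.2f}")
```

The sample correlation of the simulated pairs recovers the target 0.8, which is the property any simulation scheme for the portfolio must preserve.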
660

Identificação de faces humanas através de PCA-LDA e redes neurais SOM / Identification of human faces based on PCA - LDA and SOM neural networks

Santos, Anderson Rodrigo dos 29 September 2005 (has links)
O uso de dados biométricos da face para verificação automática de identidade é um dos maiores desafios em sistemas de controle de acesso seguro. O processo é extremamente complexo e influenciado por muitos fatores relacionados à forma, posição, iluminação, rotação, translação, disfarce e oclusão de características faciais. Hoje existem muitas técnicas para se reconhecer uma face. Esse trabalho apresenta uma investigação buscando identificar uma face no banco de dados ORL com diferentes grupos de treinamento. É proposto um algoritmo para o reconhecimento de faces baseado na técnica de subespaço LDA (PCA + LDA) utilizando uma rede neural SOM para representar cada classe (face) na etapa de classificação/identificação. Aplicando o método do subespaço LDA busca-se extrair as características mais importantes na identificação das faces previamente conhecidas e presentes no banco de dados, criando um espaço dimensional menor e discriminante com relação ao espaço original. As redes SOM são responsáveis pela memorização das características de cada classe. O algoritmo oferece maior desempenho (taxas de reconhecimento entre 97% e 98%) com relação às adversidades e fontes de erros que prejudicam os métodos de reconhecimento de faces tradicionais. / The use of biometric techniques for automatic personal identification is one of the biggest challenges in the security field. The process is complex because it is influenced by many factors related to the form, position, illumination, rotation, translation, disguise and occlusion of face characteristics. Nowadays, there are many face recognition techniques. This work presents a methodology for searching a face in the ORL database with different training sets. The algorithm for face recognition was based on the sub-space LDA (PCA + LDA) technique, using a SOM neural net to represent each class (face) in the classification/identification stage.
By applying the sub-space LDA method, we extract the most important characteristics for identifying the previously known faces in the database, creating a reduced dimensional space that is more discriminative than the original one. The SOM nets are responsible for memorizing the characteristics of each class. The algorithm offers high performance (recognition rates between 97% and 98%) considering the adversities and sources of error inherent in traditional face recognition methods.
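The classification/identification stage can be caricatured as nearest-centroid matching in the reduced subspace (a simplified stand-in for the per-class SOM networks used in the thesis); the 2-D subspace coordinates below are invented for illustration.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(component) / len(vectors) for component in zip(*vectors)]

def nearest_class(sample, centroids):
    """Return the label of the closest class centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical 2-D subspace coordinates for two enrolled faces (3 samples each)
train = {
    "face_A": [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0]],
    "face_B": [[-1.0, -0.8], [-0.9, -1.1], [-1.1, -1.0]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}
probe = [0.8, 1.2]
print(nearest_class(probe, centroids))  # → face_A
```

A SOM replaces the single centroid with a small grid of prototype vectors per class, which can capture multimodal variation (pose, lighting) that one mean cannot.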
