471

The SGE framework discovering spatio-temporal patterns in biological systems with spiking neural networks (S), a genetic algorithm (G) and expert knowledge (E)

Sichtig, Heike. January 2009
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Bioengineering, Biomedical Engineering, 2009. / Includes bibliographical references.
472

Phoneme-based topic spotting on the Switchboard Corpus

Theunissen, M. W. (Marthinus Wilhelmus)
Thesis (MScEng)--Stellenbosch University, 2002. / ABSTRACT: The field of topic spotting in conversational speech deals with the problem of identifying "interesting" conversations or speech extracts within large volumes of speech data. Typical applications include the surveillance and screening of messages before they are referred to human operators. Closely related methods can also be used for data mining of multimedia databases, literature searches, language identification, call routing and message prioritisation. The first topic-spotting systems used words as their most basic units; but because of the poor performance of speech recognisers, such systems need a large amount of topic-specific hand-transcribed training data. For this reason researchers started concentrating on methods that use phonemes instead, so that recognition errors occur on smaller, and therefore less important, units. Phoneme-based methods consequently make it feasible to use computer-generated transcriptions as training data. Building on word-based methods, a number of phoneme-based systems have emerged. The two most promising are the Euclidean Nearest Wrong Neighbours (ENWN) algorithm and the newly developed Stochastic Method for the Automatic Recognition of Topics (SMART). Previous experiments on the Oregon Graduate Institute of Science and Technology's Multi-Language Telephone Speech Corpus suggested that SMART yields a large improvement over ENWN, which had outperformed competing phoneme-based systems in earlier evaluations. However, the small amount of data available for those experiments meant that more rigorous testing was required. In this research, the algorithms were therefore re-implemented to run on the much larger Switchboard Corpus. A substantial improvement of SMART over ENWN was again observed, confirming the earlier result. In addition, an investigation into improving SMART was conducted, which resulted in a new counting strategy with a corresponding improvement in performance.
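The abstract does not describe the internals of ENWN or SMART. As a hedged illustration of the general phoneme-based idea only, the sketch below scores an utterance's phoneme n-gram statistics against per-topic frequency models; the phoneme strings, topic names and smoothing floor are all invented for the example:

```python
import math
from collections import Counter

def ngrams(phonemes, n=3):
    """All contiguous n-grams of a phoneme sequence, e.g. ('k', 'ae', 't')."""
    return [tuple(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1)]

def train_topic_model(transcriptions, n=3):
    """Relative n-gram frequencies estimated from one topic's phoneme transcriptions."""
    counts = Counter()
    for seq in transcriptions:
        counts.update(ngrams(seq, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def score(phonemes, model, n=3, floor=1e-6):
    """Average log-probability of the utterance's n-grams under a topic model."""
    grams = ngrams(phonemes, n)
    return sum(math.log(model.get(g, floor)) for g in grams) / max(len(grams), 1)

# Toy example with invented phoneme strings: pick the higher-scoring topic.
sports = [["b", "ao", "l", "g", "ey", "m"], ["t", "iy", "m", "w", "ih", "n"]]
finance = [["m", "ah", "n", "iy", "r", "ey", "t"], ["b", "ae", "ng", "k", "l", "ow", "n"]]
models = {"sports": train_topic_model(sports), "finance": train_topic_model(finance)}
utterance = ["t", "iy", "m", "w", "ih", "n"]
print(max(models, key=lambda t: score(utterance, models[t])))  # -> "sports"
```

The point of the sketch is only that classification can operate on error-tolerant phoneme statistics rather than word transcriptions, which is what makes computer-generated training transcriptions feasible.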
473

Real-time Software Hand Pose Recognition using Single View Depth Images

Alberts, Stefan Francois
Thesis (MEng)--Stellenbosch University, 2014. / ABSTRACT: The fairly recent introduction of low-cost depth sensors such as Microsoft's Xbox Kinect has encouraged a large amount of research into using depth sensors for many common computer vision problems. Depth images have an advantage over normal colour images in that objects in a scene can easily be segmented in real time. Microsoft used the Kinect's depth images to successfully separate multiple users and track the larger body joints, but has difficulty tracking smaller joints such as those of the fingers; this is a result of the low resolution and noisy nature of the depth images the Kinect produces. The objective of this project is to use those depth images to remotely track the user's hands and to recognise static hand poses in real time. Such a system would make it possible to control an electronic device from a distance without a remote control; it could control computer systems during computer-aided presentations, translate sign language, and provide more hygienic control devices in clean rooms such as operating theatres and electronics laboratories. The proposed system uses the open-source OpenNI framework to retrieve the depth images from the Kinect and to track the user's hands. Random Decision Forests, trained on computer-generated depth images of various hand poses, classify the hand regions of a depth image. The region images are then processed with a Mean-Shift based joint estimator to find the 3D joint coordinates, and these coordinates are finally used to classify the static hand pose with a Support Vector Machine trained using the libSVM library. The system achieves a final accuracy of 95.61% on synthetic data and 81.35% on real-world data.
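The abstract names libSVM for the final classification stage. As a minimal sketch of that stage alone (the forest segmentation and Mean-Shift joint estimation are omitted, and the joint data below is synthetic), scikit-learn's SVC, which wraps libSVM, can map estimated 3D joint coordinates to static pose labels:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for Mean-Shift joint estimates: 16 hand joints x 3D = 48 features.
# Each "pose" is a noisy cluster around a random prototype skeleton.
n_poses, n_samples, n_features = 5, 200, 48
prototypes = rng.normal(size=(n_poses, n_features))
X = np.vstack([p + 0.1 * rng.normal(size=(n_samples, n_features)) for p in prototypes])
y = np.repeat(np.arange(n_poses), n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVC is scikit-learn's wrapper around libSVM, the library named in the abstract.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print(f"pose accuracy: {clf.score(X_test, y_test):.3f}")
```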
474

GrailKnights : an automaton mass manipulation package for enhanced pattern analysis

Du Preez, Hercule
Thesis (MSC (Mathematical Sciences. Computer Science))--Stellenbosch University, 2008. / This thesis describes the design and implementation of an application named GrailKnights that allows for the mass manipulation of automata, with added visual pattern-analysis features. It comprises a database-driven backend for automata storage, and a graphical user interface for filtering the automata selected from the database and visually inspecting the patterns that emerge across the resulting automata.
475

Reverse engineering of mechanical components through a "design for manufacturing" approach / Reverse engineering for manufacturing (REFM)

Ali, Salam 01 July 2015
In the literature, the reverse engineering (RE) process recovers a poorly parameterized CAD model in which modifications are difficult to make; a manufacturing process plan is then generated from this model with CAM (Computer Aided Manufacturing) software. This thesis proposes an RE methodology for mechanical components in a manufacturing context, called "Reverse Engineering For Manufacturing" (REFM). It yields a CAPP (Computer Aided Process Planning) model containing information such as machining operations, fixtures and set-ups; once generated, this process plan is stored so it can be reused on similar cases. Integrating manufacturing knowledge into the RE process draws on the concepts of Design For Manufacturing (DFM) and Knowledge Based Engineering (KBE), while reusing machining strategies to handle recurring cases draws on work in shape matching: topological descriptors make it possible to recognise, after scanning, the nature of a part and thus apply an existing machining strategy. The thesis therefore brings together two research domains — shape matching, and technical-data-management methodologies (DFM and KBE). It aims to propose a new RE approach in a machining context and to develop an RE demonstrator that manages the recurring aspects of RE by reusing known case studies.
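The abstract does not say which shape descriptor is used. Purely as a hedged illustration of how a shape descriptor can index stored machining strategies for case reuse, the sketch below uses the classic D2 shape distribution (a histogram of distances between random surface points) with a chi-square retrieval distance; the descriptor choice, point clouds and strategy names are all invented for the example:

```python
import numpy as np

def d2_descriptor(points, n_pairs=20000, bins=32, rng=None):
    """D2 shape distribution: normalized histogram of distances between
    random point pairs — a classic global shape-matching descriptor."""
    rng = rng if rng is not None else np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def retrieve_strategy(scan, library):
    """Return the stored case whose descriptor is closest (chi-square distance)."""
    q = d2_descriptor(scan)
    def chi2(a, b):
        m = (a + b) > 0
        return 0.5 * np.sum((a[m] - b[m]) ** 2 / (a[m] + b[m]))
    return min(library, key=lambda name: chi2(q, library[name]))

# Toy "scanned part" library: an elongated shaft-like cloud and a flat plate-like one.
rng = np.random.default_rng(1)
shaft = rng.normal(size=(5000, 3)) * np.array([5.0, 0.3, 0.3])
plate = rng.normal(size=(5000, 3)) * np.array([3.0, 3.0, 0.1])
library = {"turning_strategy": d2_descriptor(shaft), "milling_strategy": d2_descriptor(plate)}
query = rng.normal(size=(5000, 3)) * np.array([5.0, 0.3, 0.3])  # another shaft-like scan
print(retrieve_strategy(query, library))  # -> "turning_strategy"
```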
476

One-class classification for cyber intrusion detection in industrial systems

Nader, Patric 24 September 2015
The security of critical infrastructures has recently drawn researchers' attention, with the increasing risk of cyber-attacks and terrorist threats against these systems. The majority of these infrastructures are controlled via SCADA (Supervisory Control And Data Acquisition) systems, which allow remote monitoring of industrial processes such as electrical power grids, gas pipelines, water distribution systems, wastewater collection systems and nuclear power plants. Traditional intrusion detection systems (IDS) cannot detect new types of attacks absent from their databases, so they cannot ensure maximum protection for these infrastructures. The objective of this thesis is to give IDS additional help in protecting industrial systems against cyber-attacks and intrusions. The complexity of the systems studied and the diversity of attacks make modelling the attacks very difficult. To overcome this difficulty, we use machine learning, specifically one-class classification: from training samples, these methods learn a decision function that classifies new samples as outliers or normal data, the decision function delimiting a region of the data space that contains the bulk of the training data. This dissertation proposes specific one-class classification approaches, sparse formulations of these approaches, and an online approach for real-time detection. The relevance of these approaches is illustrated on benchmark data from three different types of critical infrastructure.
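The thesis proposes its own one-class formulations; as a minimal sketch of the general principle only, an off-the-shelf one-class SVM can be trained on normal operating data alone and then flag departures from it. The feature vectors below are synthetic stand-ins for SCADA measurements:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training data: feature vectors from *normal* operation only (e.g. sensor
# readings, actuator states); no attack samples are needed at training time.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

# nu upper-bounds the fraction of training points treated as outliers; the
# learned decision function encloses the bulk of the normal data.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01).fit(X_train)

X_new = np.vstack([
    rng.normal(size=(5, 8)),           # normal behaviour
    rng.normal(loc=6.0, size=(5, 8)),  # anomalous behaviour (possible intrusion)
])
print(detector.predict(X_new))  # +1 = normal, -1 = flagged as outlier
```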
477

MMD and Ward criterion in an RKHS : application to kernel-based hierarchical agglomerative clustering

Li, Na 01 December 2015
Clustering, a useful tool for unsupervised classification, is the task of grouping objects according to some measured or perceived characteristics, and it has had great success in exploring the hidden structure of unlabelled data sets. Kernel-based clustering algorithms are especially prominent: they perform competitively against conventional methods thanks to their ability to turn a nonlinear problem into a linear one in a higher-dimensional feature space, while limiting algorithmic complexity. In this work we propose a Kernel-based Hierarchical Agglomerative Clustering (KHAC) algorithm using Ward's criterion. Our method is motivated by the Maximum Mean Discrepancy (MMD), a criterion originally proposed to measure the difference between probability distributions that is easily computed with kernels and embeds naturally in an RKHS. A close relationship between MMD and Ward's criterion is established: a modification of MMD that overcomes the limits inherent in its direct use leads to Ward's criterion, well known in hierarchical clustering. Within the KHAC method, selection of the kernel parameter and determination of the number of clusters are studied, with satisfactory performance. Finally, an iterative KHAC algorithm is proposed that determines the optimal kernel parameter, yields a meaningful number of clusters and partitions the data set automatically.
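As a sketch of the MMD itself (not of the full KHAC algorithm), the squared MMD between two samples can be estimated purely from kernel evaluations, using the biased estimator MMD²(X, Y) = mean k(x, x′) + mean k(y, y′) − 2 mean k(x, y); the Gaussian kernel width below is an arbitrary choice for the example:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y in the RKHS:
    MMD^2 = mean k(x, x') + mean k(y, y') - 2 mean k(x, y)."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y_same = rng.normal(0.0, 1.0, size=(200, 2))
Y_diff = rng.normal(2.0, 1.0, size=(200, 2))
print(f"same distribution:      {mmd2(X, Y_same):.4f}")  # close to 0
print(f"different distribution: {mmd2(X, Y_diff):.4f}")  # clearly larger
```

Because the estimate is written entirely in terms of kernel means, the cost of merging two clusters can be expressed the same way, which is the bridge to Ward's criterion that the thesis exploits.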
478

Automatic fingerprint recognition using wavelets and artificial neural networks

Vieira Neto, Hugo 30 April 1998
CNPq, CAPES / This work proposes a new approach to automatic fingerprint recognition. Unlike traditional techniques, which analyse distinctive features of fingerprint patterns with search algorithms dedicated to that purpose, the suggested methodology is built on signal-representation tools and connectionist classification models. It rests on two main techniques: the FBI's Wavelet Scalar Quantization standard for fingerprint image compression as the feature-extraction method, and Artificial Neural Network models that use linear training techniques as the classifier for the resulting patterns. The main objective of the suggested method is a recognition system independent of fingerprint-specific image analysis and processing techniques, aiming for low false-acceptance and false-rejection rates, with priority given to the false-acceptance rate. The methodologies and results of the executed experiments are presented and analysed, along with some proposals for future work and improvements. Some aspects of the image-acquisition hardware and some digital image processing techniques are also presented.
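As a hedged illustration of wavelet-based feature extraction in general — note that the FBI's WSQ standard uses a specific 64-subband wavelet-packet decomposition, whereas the sketch below substitutes a plain multilevel DWT (PyWavelets) purely to show the idea — subband energies can serve as a fixed-length feature vector for a neural-network classifier:

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="bior4.4", level=3):
    """Energy of each wavelet subband as a fixed-length feature vector.
    (A plain DWT stand-in for WSQ's wavelet-packet decomposition.)"""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
    for cH, cV, cD in coeffs[1:]:                  # detail subbands per level
        feats += [np.mean(c ** 2) for c in (cH, cV, cD)]
    return np.array(feats)

# Toy "fingerprint": a ridge-like sinusoidal pattern plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 16 * np.pi, 256)
image = np.sin(x)[None, :] * np.ones((256, 1)) + 0.1 * rng.normal(size=(256, 256))
print(wavelet_features(image))  # 1 + 3*level = 10 values, fed to the classifier
```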
479

Development of an armband for capturing electromyography signals for movement recognition

Mendes Júnior, José Jair Alves 12 December 2016
This master's thesis presents the development of an armband system that captures surface electromyography signals for arm-movement recognition. All project stages are presented, from the physical construction and circuit design through the acquisition system, processing, and classification with Artificial Neural Networks. An armband with eight channels was built to capture the electromyography signal, and an auxiliary reference system (a gyroscope) was used to indicate the instant at which the arm moved. Signals were acquired from the biceps and triceps muscle groups and, after sensor data fusion, processed with LabVIEW™ routines. After feature extraction, the samples were fed to a Multi-Layer Perceptron neural network to classify arm flexion and extension movements. The same armband was then placed on the forearm, and its electromyography signals were compared with those obtained by the commercial Myo™ device. The system achieved high classification rates, above 95%, and the signals obtained on the forearm were similar to those from the commercial device.
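The abstract does not list the extracted features, so the sketch below assumes three classic time-domain EMG features (mean absolute value, RMS, zero crossings) per channel and trains a Multi-Layer Perceptron on synthetic eight-channel windows; the window sizes, channel gains and labels are invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def td_features(window):
    """Classic time-domain EMG features per channel:
    mean absolute value, RMS, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.abs(np.diff(np.signbit(window).astype(int), axis=0)), axis=0)
    return np.concatenate([mav, rms, zc])

rng = np.random.default_rng(0)
n_windows, win_len, n_channels = 400, 200, 8

# Synthetic stand-in for armband windows: "flexion" bursts are stronger on the
# biceps-side channels, "extension" on the opposite side.
X, y = [], []
for label in (0, 1):  # 0 = flexion, 1 = extension
    gain = np.where(np.arange(n_channels) < 4, 2.0, 0.5)
    if label:
        gain = gain[::-1]
    for _ in range(n_windows // 2):
        X.append(td_features(rng.normal(size=(win_len, n_channels)) * gain))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print(f"flexion/extension accuracy: {mlp.score(X_te, y_te):.3f}")
```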
480

A methodology for capturing, detecting and normalizing facial images

Prodossimo, Flávio das Chagas 29 May 2013
CAPES / O reconhecimento facial está se tornando uma tarefa comum com a evolução da tecnologia da informação. Este artefato pode ser utilizado na área de segurança, controlando acesso a lugares restritos, identificando pessoas que tenham cometido atos ilícitos, entre outros. Executar o reconhecimento facial é uma tarefa complexa e, para completar este processo, são implementadas etapas que compreendem: a captura de imagens faciais, a detecção de regiões de interesse, a normalização facial, a extração de características e o reconhecimento em si. Dentre estas, as três primeiras são tratadas neste trabalho, que tem como objetivo principal a normalização automática de faces. Tanto para a captura de imagens quanto para a normalização frontal existem normas internacionais que padronizam o procedimento de execução destas tarefas e que foram utilizadas neste trabalho. Além disto, algumas normas foram adaptadas para a construção de uma base de imagens faciais com o objetivo de auxiliar o processo de reconhecimento facial. Também foi criada uma nova metodologia para normalização de imagens faciais laterais, baseando-se nas normas da normalização frontal. Foram implementadas normalização semiautomática frontal, semiautomática lateral e automática lateral. Para a execução da normalização facial automática são necessários dois pontos de controle, os dois olhos, o que torna indispensável a execução da etapa de detecção de regiões de interesse. Neste trabalho, foram comparadas duas metodologias semelhantes para detecção. Primeiramente foi detectada uma região contendo ambos os olhos e, em seguida, dentro desta região, foram detectados cada um dos olhos de forma mais precisa. Para as duas metodologias foram utilizadas técnicas de processamento de imagens e reconhecimento de padrões. A primeira metodologia utiliza como filtro o Haar-Like Features em conjunto com a técnica de reconhecimento de padrões Adaptative Boosting. Sendo que as técnicas equivalentes no segundo algoritmo foram o Local Binary Pattern e o Support Vector Machines, respectivamente. Na segunda metodologia também foi utilizado um algoritmo de otimização de busca baseado em vizinhança, o Variable Neighborhood Search. Os estudos resultaram em uma base com 3726 imagens, mais uma base normalizada frontal com 966 imagens e uma normalizada lateral com 276 imagens. A detecção de olhos resultou, nos melhores testes, em aproximadamente 99% de precisão para a primeira metodologia e 95% para a segunda, sendo que em todos os testes a primeira foi o mais rápida. Com o desenvolvimento de trabalhos futuros pretende-se: tornar públicas as bases de imagens, melhorar a porcentagem de acerto e velocidade de processamento para todos os testes e melhorar a normalização, implementando a normalização de plano de fundo e também de iluminação. / With the evolution of information technology Facial recognition is becoming a common task. This artifact can be used in security, controlling access to restricted places and identifying persons, for example. Facial recognition is a complex task, and it's divided into some process, comprising: facial images capture, detection of regions of interest, facial normalization, feature extraction and recognition itself. Among these, the first three are treated in this work, which has as its main objective the automatic normalization of faces. For the capture of images and for the image normalization there are international standards that standardize the procedure for implementing these tasks and which were used in this work. 
In addition to following these rules, other standardizations have been developed to build a database of facial images in order to assist the process of face recognition. A new methodology for normalization of profile faces, based on the rules of frontal normalization. Some ways of normalization were implemented: frontal semiautomatic, lateral semiautomatic and automatic frontal. For the execution of frontal automatic normalization we need two points of interest, the two eyes, which makes it a necessary step to execute the detection regions of interest. In this work, we compared two similar methods for detecting. Where was first detected a region containing both eyes and then, within this region were detected each eye more accurately. For the two methodologies were used techniques of image processing and pattern recognition. The first method based on the Viola and Jones algorithm, the filter uses as Haar-like Features with the technique of pattern recognition Adaptive Boosting. Where the second algorithm equivalent techniques were Local Binary Pattern and Support Vector Machines, respectively. In the second algorithm was also used an optimization algorithm based on neighborhood search, the Variable Neighborhood Search. This studies resulted in a database with 3726 images, a frontal normalized database with 966 images and a database with face's profile normalized with 276 images. The eye detection resulted in better tests, about 99 % accuracy for the first method and 95 % for the second, and in all tests the first algorithm was the fastest. With the development of future work we have: make public the images database, improve the percentage of accuracy and processing speed for all tests and improve the normalization by implementing the normalization of the background and also lighting.
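As a minimal sketch of the first detection methodology's technique family (Haar-like features with AdaBoost), OpenCV ships pretrained Viola-Jones cascades that can locate a face region and then each eye within it — the two control points needed for normalization. This is an off-the-shelf stand-in, not the thesis's implementation, and the image path is illustrative:

```python
import cv2

# Pretrained Haar cascades bundled with opencv-python (Haar-like features + AdaBoost).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

image = cv2.imread("face.jpg")  # any frontal face photo (path is illustrative)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Stage 1: find the face region; stage 2: find each eye inside it.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = gray[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5):
        # Eye centres are the two control points used for normalization.
        cx, cy = x + ex + ew // 2, y + ey + eh // 2
        cv2.circle(image, (cx, cy), 3, (0, 255, 0), -1)

cv2.imwrite("face_eyes.jpg", image)
```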
