  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Human Identification Based on Three-Dimensional Ear and Face Models

Cadavid, Steven 05 May 2011 (has links)
We propose three biometric systems for performing 1) Multi-modal Three-Dimensional (3D) ear + Two-Dimensional (2D) face recognition, 2) 3D face recognition, and 3) hybrid 3D ear recognition combining local and holistic features. For the 3D ear component of the multi-modal system, uncalibrated video sequences are utilized to recover the 3D ear structure of each subject within a database. For a given subject, a series of frames is extracted from a video sequence and the Region-of-Interest (ROI) in each frame is independently reconstructed in 3D using Shape from Shading (SFS). A fidelity measure is then employed to determine the model that most accurately represents the 3D structure of the subject’s ear. Shape matching between a probe and gallery ear model is performed using the Iterative Closest Point (ICP) algorithm. For the 2D face component, a set of facial landmarks is extracted from frontal facial images using the Active Shape Model (ASM) technique. Then, the responses of the facial images to a series of Gabor filters at the locations of the facial landmarks are calculated. The Gabor features are stored in the database as the face model for recognition. Match-score level fusion is employed to combine the match scores obtained from both the ear and face modalities. The aim of the proposed system is to demonstrate the superior performance that can be achieved by combining the 3D ear and 2D face modalities over either modality employed independently.

For the 3D face recognition system, we employ an Adaboost algorithm to build a classifier based on geodesic distance features. Firstly, a generic face model is finely conformed to each face model contained within a 3D face dataset. Secondly, the geodesic distances between anatomical point pairs are computed across each conformed generic model using the Fast Marching Method. The Adaboost algorithm then generates a strong classifier based on a collection of geodesic distances that are most discriminative for face recognition. The identification and verification performances of three Adaboost algorithms, namely, the original Adaboost algorithm proposed by Freund and Schapire and two variants (the Gentle and Modest Adaboost algorithms), are compared.

For the hybrid 3D ear recognition system, we propose a method to combine local and holistic ear surface features in a computationally efficient manner. The system is comprised of four primary components, namely, 1) ear image segmentation, 2) local feature extraction and matching, 3) holistic feature extraction and matching, and 4) a fusion framework combining local and holistic features at the match score level. For the segmentation component, we employ our method proposed in [111] to localize a rectangular region containing the ear. For the local feature extraction and representation component, we extend the Histogram of Categorized Shapes (HCS) feature descriptor, proposed in [111], to an object-centered 3D shape descriptor, termed Surface Patch Histogram of Indexed Shapes (SPHIS), for surface patch representation and matching. For the holistic matching component, we introduce a voxelization scheme for holistic ear representation from which an efficient, element-wise comparison of gallery-probe model pairs can be made. The match scores obtained from both the local and holistic matching components are fused to generate the final match scores. Experimental results conducted on the University of Notre Dame (UND) Collection J2 dataset demonstrate that the proposed approach outperforms state-of-the-art 3D ear biometric systems in both accuracy and efficiency.
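The match-score level fusion used in the multi-modal system can be sketched as follows. This is a minimal illustration under assumed choices, not the thesis's actual implementation: the min-max normalization, equal weighting, and toy score values are all assumptions.

```python
def min_max_normalize(scores):
    """Map raw match scores onto [0, 1] so different modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(ear_scores, face_scores, w_ear=0.5):
    """Weighted-sum fusion of per-gallery-subject scores from two modalities."""
    ear_n = min_max_normalize(ear_scores)
    face_n = min_max_normalize(face_scores)
    return [w_ear * e + (1.0 - w_ear) * f for e, f in zip(ear_n, face_n)]

# Toy similarity scores of one probe against three gallery subjects (made up):
ear_scores = [0.2, 0.9, 0.4]      # e.g., inverse ICP registration error
face_scores = [10.0, 42.0, 15.0]  # e.g., Gabor-feature similarity
fused = fuse_scores(ear_scores, face_scores)
best_match = max(range(len(fused)), key=lambda i: fused[i])
```

In practice the weight given to each modality would be tuned on a validation set rather than fixed at 0.5.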
22

Continuous user authentication using multi-modal biometrics

Saevanee, Hataichanok January 2014 (has links)
It is commonly acknowledged that mobile devices now form an integral part of an individual’s everyday life. Modern mobile handheld devices are capable of providing a wide range of services and applications over multiple networks. With this increasing capability and accessibility, they introduce additional demands in terms of security. This thesis explores the need for authentication on mobile devices and proposes a novel mechanism to improve upon current techniques. The research begins with an intensive review of mobile technologies and the security challenges that mobile devices face, to illustrate the imperative of authentication on mobile devices. The research then highlights the existing authentication mechanisms and their wide range of weaknesses. To this end, biometric approaches are identified as an appropriate solution and an opportunity for security to be maintained beyond point-of-entry. Indeed, by utilising behavioural biometric techniques, authentication can be performed in a continuous and transparent fashion. This research investigated three behavioural biometric techniques based on SMS texting activities and messages, looking to apply these techniques as a multi-modal biometric authentication method for mobile devices. The results showed that linguistic profiling, keystroke dynamics and behaviour profiling can be used to discriminate users with overall Equal Error Rates (EER) of 12.8%, 20.8% and 9.2% respectively. By using a combination of biometrics, the results showed clearly that the classification performance is better than using a single biometric technique, achieving an EER of 3.3%. Based on these findings, a novel architecture for multi-modal biometric authentication on mobile devices is proposed. The framework is able to provide robust, continuous and transparent authentication in standalone and server-client modes regardless of mobile hardware configuration, and it continuously maintains the security status of the device. With a high security status, users are permitted to access sensitive services and data; with a low security status, users are required to re-authenticate before accessing sensitive services or data.
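The Equal Error Rate figures quoted above correspond to the operating point where the false accept rate and the false reject rate coincide. A threshold-sweep estimate can be sketched as follows; the toy scores are illustrative, not taken from the thesis.

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a decision threshold over all observed
    scores; higher scores are taken to indicate the genuine user."""
    eer, gap = 1.0, float("inf")
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(1 for s in impostor if s >= t) / len(impostor)  # false accept rate
        frr = sum(1 for s in genuine if s < t) / len(genuine)     # false reject rate
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Perfectly separable toy scores give an EER of zero:
clean = equal_error_rate([0.8, 0.9, 0.85], [0.1, 0.2, 0.3])
# Overlapping scores give a positive EER:
noisy = equal_error_rate([0.6, 0.7, 0.9], [0.2, 0.3, 0.65])
```

A multi-modal system lowers the EER because the fused score distribution separates genuine and impostor attempts better than any single modality's distribution does.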
23

Analysis of ordered categorical data: partial proportional odds and stratified models

Savaluny, Elly January 2000 (has links)
No description available.
24

Optimisation tools for enhancing signature verification

Ng, Su Gnee January 2000 (has links)
No description available.
25

D-optimal designs for the Michaelis-Menten and Hill models

Ferreira, Iuri Emmanuel de Paula. January 2010 (has links)
Advisor: Luzia Aparecida Trinca / Committee: Cláudia Pio Ferreira / Committee: Silvio Sandoval Zocchi / Committee: Miriam Harumi Tsunemi / Committee: Julia Maria Pavan Soler / Abstract: The results of many experiments in biological fields, such as pharmacology, biochemistry and agriculture, are usually analyzed by fitting nonlinear models, which are supposed to describe well the response to the pre-specified factors in the experiment. The estimates of the parameters, or of functions of them, can be imprecise if the factor levels are not adequately chosen, preventing the researcher from obtaining the desired information about the object of study. The construction of an optimum design, which maximizes the information about some aspect of interest, is crucial for the success of the experimental practice. The aim of this work was to construct exact D-optimal designs for nonlinear models commonly used in studies of enzyme kinetics and mineral transport in organisms, such as the Michaelis-Menten and Hill models. Two approaches were considered: locally optimal and pseudo-Bayesian designs. Genetic and exchange algorithms were used to obtain exact D-optimal designs for the Michaelis-Menten model, for the Hill model, each one separately, and for both models under a composite criterion. Different values and probability distributions with several coefficients of variation were considered as prior information / Master's
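For the Michaelis-Menten model v = Vmax·s/(Km + s), the D-criterion maximizes the determinant of the Fisher information matrix built from the model's parameter sensitivities, evaluated locally at a guessed parameter value. A minimal sketch, with illustrative parameter values that are assumptions rather than anything from the thesis:

```python
def d_criterion(design, vmax=1.0, km=2.0):
    """Determinant of the 2x2 information matrix for v = vmax*s/(km + s),
    evaluated locally at an assumed parameter guess (unit error variance)."""
    m11 = m12 = m22 = 0.0
    for s in design:
        f1 = s / (km + s)                # sensitivity dv/dVmax
        f2 = -vmax * s / (km + s) ** 2   # sensitivity dv/dKm
        m11 += f1 * f1
        m12 += f1 * f2
        m22 += f2 * f2
    return m11 * m22 - m12 * m12

# A known result from the optimal-design literature: on [0, s_max] the locally
# D-optimal two-point design supports s_max and km*s_max/(2*km + s_max).
s_max, km = 10.0, 2.0
optimal = [km * s_max / (2 * km + s_max), s_max]
naive = [s_max / 2, s_max]
```

An exchange or genetic algorithm searches over candidate support points for the design maximizing this criterion; a one-point design is singular (zero determinant), since two parameters cannot be estimated from one level.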
26

The interior point method in radiotherapy planning

Martins, Andréa Camila dos Santos. January 2011 (has links)
Advisor: Helenice de Oliveira Florentino Silva / Committee: Andréa Carla Gonçalves Vianna / Committee: Antônio Roberto Balbo / Abstract: Cancer treatment by radiotherapy aims to eliminate tumor cells while preserving healthy cells, thus achieving a better homogenization of the administered dose and fewer complications during treatment. Treatment success depends on good planning. For optimal planning, mathematical techniques are used to maximize the radiation delivered to the tumor and minimize the radiation in the surrounding regions; linear programming models have thus been valuable tools to assist the construction of radiotherapy treatment plans. This work therefore aims: to study the key concepts involved in planning cancer treatment by radiotherapy; to study linear programming (LP) models applied to optimal planning; to present a broad study of the interior point technique for LP; and to present an application of this technique to solving an optimal planning problem for cancer treatment by radiotherapy / Master's
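A radiotherapy plan in this LP framing chooses nonnegative beam weights to maximize the dose delivered to the tumor subject to caps on the dose received by healthy structures. A toy two-beam instance is sketched below with made-up coefficients; for brevity it is solved by enumerating vertices of the feasible region rather than by an interior point method.

```python
import itertools

def solve_2var_lp(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to x, y >= 0 and a*x + b*y <= r for
    each (a, b, r) in constraints, by enumerating boundary intersections."""
    lines = list(constraints) + [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # axes x=0, y=0
    best = None
    for (a1, b1, r1), (a2, b2, r2) in itertools.combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries, no vertex
        x = (r1 * b2 - r2 * b1) / det  # Cramer's rule
        y = (a1 * r2 - a2 * r1) / det
        if x < -1e-9 or y < -1e-9:
            continue  # violates nonnegativity
        if all(a * x + b * y <= r + 1e-9 for a, b, r in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# Two beams; objective coefficients = tumor dose per unit beam weight.
# Each constraint caps the dose absorbed by one healthy structure.
tumor_dose, w1, w2 = solve_2var_lp((2.0, 3.0), [(1.0, 2.0, 4.0), (3.0, 1.0, 6.0)])
```

An interior point solver reaches the same optimum by traversing the interior of the feasible region, which scales far better than vertex enumeration when a plan has thousands of beamlets and voxel constraints.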
27

Optimal designs for pharmacokinetic experiments

Santos, Maurício Bedim dos [UNESP] 05 March 2010 (has links) (PDF)
Trials in clinical pharmacology involve collecting blood samples and measuring the concentration of a drug at pre-specified moments. Current practice usually fixes the sampling times arbitrarily, which can result in data that are uninformative for fitting the intended model. A methodology for solving such problems is the construction of optimum designs. In general, the models involve nonlinear equations; a popular one is the one-compartment model (first-order absorption and elimination), which has three parameters. The main problem of design for nonlinear models is that the matrix of variances and covariances of the parameter estimators depends on the parameter values themselves, making planning more difficult. Another difficulty is that several samples are taken from the same subject, so the responses are correlated; the matrix of variances and covariances therefore also depends on the correlations, which can be incorporated by considering a nonlinear model with random effects. This work aims to study the theory of optimal designs and the construction of an algorithm to optimize designs under the nonlinear model with fixed effects and random effects. The methodology can produce locally optimum designs at some prior value of the parameters, or try to reach a global optimum through the incorporation of probability distributions of the parameters, which are taken into account when calculating the value of the criterion; such designs are called Bayesian. Based on the results of an experiment from the literature, local and Bayesian D- and Aw-optimum designs were obtained. To compare designs, their efficiencies were calculated.
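The one-compartment model mentioned above predicts blood concentration after an oral dose as the difference of two exponentials. A sketch with illustrative parameter values (assumptions for demonstration, not values from the thesis):

```python
import math

def concentration(t, ka=1.5, ke=0.25, v=10.0, dose=100.0):
    """One-compartment model with first-order absorption rate ka, elimination
    rate ke, and volume of distribution v; the dose is assumed known."""
    scale = dose * ka / (v * (ka - ke))
    return scale * (math.exp(-ke * t) - math.exp(-ka * t))

# The concentration peaks at t_max = ln(ka/ke) / (ka - ke):
ka, ke = 1.5, 0.25
t_max = math.log(ka / ke) / (ka - ke)
```

The design problem is then to choose the sampling times t so that the resulting concentration measurements make the estimates of the model parameters as precise as possible, which is exactly what the optimality criteria above quantify.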
28

A probabilistic approach to the basic reproduction number in epidemiological models with an application to eucalyptus rust

Kodaira, Juliana Yukari [UNESP] 25 February 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In mathematical epidemiology, an important measure derived from the deterministic model associated with the transmission dynamics of an infectious disease is the expected number of secondary infections produced by an index case in a completely susceptible population, known as the basic reproduction number R0. Using Monte Carlo simulations, we studied the effect of uncertainty on R0 in compartmental disease-transmission models, associating uniformly distributed random variables with each constituent parameter of R0. This perturbation of the parameters corresponds to the intrinsic imprecision of their values in nature. In this work we also consider different ranges for the disease transmission rates in order to evaluate their dynamical effects. We apply this method to eucalyptus rust, which is a very common and severe disease in plantations of Eucalyptus spp. and other Myrtaceae, transmitted by the fungus Puccinia psidii Winter. Today, eucalyptus has significant importance in both the national and international markets; therefore, initiatives that help its integrated disease management are essential. Our results show that the method is efficient, since it represents the influence of the disease transmission rates on the approximated probability distribution of R0, allowing us to obtain the empirical complementary percentile functions for the considered models
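The Monte Carlo perturbation of R0 described above can be sketched for the simplest compartmental case, where R0 = β/γ (transmission rate over recovery rate). The parameter ranges below are illustrative assumptions, not the eucalyptus-rust values from the thesis.

```python
import random

def r0_samples(n=10000, beta_range=(0.1, 0.5), gamma_range=(0.1, 0.3), seed=7):
    """Monte Carlo draws of R0 = beta/gamma with each parameter uniformly
    distributed, modelling the intrinsic imprecision of the rates."""
    rng = random.Random(seed)
    return [rng.uniform(*beta_range) / rng.uniform(*gamma_range)
            for _ in range(n)]

def prob_exceeds(samples, threshold=1.0):
    """Empirical complementary distribution function: P(R0 > threshold)."""
    return sum(1 for r in samples if r > threshold) / len(samples)

samples = r0_samples()
p_epidemic = prob_exceeds(samples)  # chance the disease can invade (R0 > 1)
```

Widening the transmission-rate range reshapes the empirical distribution of R0, which is precisely the effect the study evaluates through the complementary percentile functions.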
29

Liveness assurance in biometric systems

Du Preez, Johan Frederik 13 May 2008 (has links)
The need for a more secure cyber future is apparent in the information age that we live in. Information is fast becoming, and already is, one of the biggest assets in all domains of life. Access to information, and specifically to personal information, must be regulated and secured in a trusted way. Passwords and tokens (for example, a bank card), currently the most popular and well-known mechanisms for electronic identification, can only identify the password or token itself, but NOT the physical user presenting it for identification. Biometrics addresses this issue by being part of the physical user, for example your fingerprint, retina or iris. Current biometric technologies provide an enabling medium for more accurate identification and verification, thereby protecting and securing electronic information. BUT: one of the biggest problem areas surrounding biometrics is the fact that most biometric tokens (fingerprints, hand geometry and the human eye) can in some cases be used to identify the owner of the biometric token even after death, as if the owner were still alive. The problem becomes apparent in the case of a person who has passed away and the possibility of using the biometric tokens of the deceased to obtain access to his/her bank account. This highlights the importance of effective liveness testing. Current liveness-testing technologies cannot be trusted to the degree necessary in, for example, access to a personal bank account at an ATM (automatic teller machine). This dissertation reports on the initial stages of a research project that addresses the above problem by proposing the use of biometric tokens that do not exist unless the owner is alive; the dissertation thus coins the new term Inherent Liveness Biometrics. The way the human heart beats, used as a biometric token to identify or verify a person, might solve the issue of liveness testing, because it might prove to be a natural biometric token that is only valid for a living person: an inherent liveness biometric. / Prof. S.H. Von Solms
30

Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers

Alorf, Abdulaziz Abdullah 19 February 2020 (has links)
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because it by itself reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. In this dissertation, we proposed a real-time model for classifying 40 facial attributes, which preprocesses faces and then extracts 7 types of classical and deep features. These features were fused together to train 3 different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. We also developed a real-time model for classifying the states of human eyes and mouth (open/closed), and the presence/absence of eyeglasses in the wild. Our method begins by preprocessing a face by cropping the regions of interest (ROIs), and then describing them using RootSIFT features. These features were used to train a nonlinear support vector machine for each attribute. Our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied with deep learning classifiers as the top performers. We also introduced a new facial attribute related to Middle Eastern headwear (called igal) along with its detector. Our proposed idea was to detect the igal using a linear multiscale SVM classifier with a HOG descriptor. Thereafter, false positives were discarded using dense SIFT filtering, bag-of-visual-words decomposition, and nonlinear SVM classification.
Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick the robust face detector for their intended applications. / Doctor of Philosophy / The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. Faces probably represent the most accurate biometric trait in our daily interactions. It is therefore not surprising that so much effort from computer vision researchers has been invested in the analysis of faces. The automatic detection and analysis of faces within images has therefore received much attention in recent years. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because it by itself reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. Soft biometrics have many uses in the field of biometrics: (1) they can be utilized in a fusion framework to strengthen the performance of a primary biometric system; fusing a face with voice accent information, for example, can boost the performance of face recognition.
(2) They can also be used to create qualitative descriptions of a person, such as being an "old bald male wearing a necktie and eyeglasses." Face detection and facial attribute classification are not easy problems because of many factors, such as image orientation, pose variation, clutter, facial expressions, occlusion, and illumination, among others. In this dissertation, we introduced novel techniques to classify more than 40 facial attributes in real time. Our techniques followed the general facial attribute classification pipeline, which begins by detecting a face and ends by classifying facial attributes. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. The new facial attribute was fused with a face detector to improve the detection performance. In addition, we proposed a new method to evaluate the robustness of face detection, which is the first process in the facial attribute classification pipeline. Detecting the states of human facial attributes in real time is highly desired by many applications. For example, the real-time detection of a driver's eye state (open/closed) can prevent severe accidents. These systems are usually called driver drowsiness detection systems. For classifying 40 facial attributes, we proposed a real-time model that preprocesses faces by localizing facial landmarks to normalize faces, and then crops them based on the intended attribute. The face was cropped only if the intended attribute is inside the face region. After that, 7 types of classical and deep features were extracted from the preprocessed faces. Lastly, these 7 feature sets were fused together to train three different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. It also achieved state-of-the-art performance in classifying 14 out of 40 attributes.
We also developed a real-time model that classifies the states of three human facial attributes: (1) eyes (open/closed), (2) mouth (open/closed), and (3) eyeglasses (present/absent). Our proposed method consisted of six main steps: (1) In the beginning, we detected the human face. (2) Then we extracted the facial landmarks. (3) Thereafter, we normalized the face, based on the eye location, to the full frontal view. (4) We then extracted the regions of interest (i.e., the regions of the mouth, left eye, right eye, and eyeglasses). (5) We extracted low-level features from each region and then described them. (6) Finally, we learned a binary classifier for each attribute to classify it using the extracted features. Our developed model achieved 30 FPS with a CPU-only implementation, and our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. After that, we fused it with a face detector to improve the detection performance. The traditional Middle Eastern headwear that men usually wear consists of two parts: (1) the shemagh or keffiyeh, which is a scarf that covers the head and usually has checkered and pure white patterns, and (2) the igal, which is a band or cord worn on top of the shemagh to hold it in place. The shemagh causes many unwanted effects on the face; for example, it usually occludes some parts of the face and adds dark shadows, especially near the eyes. These effects substantially degrade the performance of face detection. To improve the detection of people who wear the traditional Middle Eastern headwear, we developed a model that can be used as a head detector or combined with current face detectors to improve their performance. 
Our igal detector consists of two main steps: (1) learning a binary classifier to detect the igal and (2) refining the classifier by removing false positives. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick the robust face detector for their intended applications. Biometric systems that use face detection suffer from huge performance fluctuation. For example, users of biometric surveillance systems that utilize face detection sometimes notice that state-of-the-art face detectors do not show good performance compared with outdated detectors. Although state-of-the-art face detectors are designed to work in the wild (i.e., no need to retrain, revalidate, and retest), they still heavily depend on the datasets they were originally trained on. This condition in turn leads to variation in the detectors' performance when they are applied to a different dataset or environment. To overcome this problem, we developed a novel optics-based blur simulator that automatically introduces the diffraction blur at different image scales/magnifications. Then we evaluated different face detectors on the output images using different IoU thresholds. Users, in the beginning, choose their own values for these three settings and then run our model to produce the efficient face detector under the selected settings.
That means our proposed model would enable users of biometric systems to pick the efficient face detector based on their system setup. Our results showed that sometimes outdated face detectors outperform state-of-the-art ones under certain settings and vice versa.
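The IoU classification threshold varied in the study above measures how much a detection must overlap the ground-truth box to count as correct. A standard computation, with boxes given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(detection, ground_truth, threshold=0.5):
    """A detection counts as correct when its IoU with the ground-truth box
    reaches the chosen classification threshold."""
    return iou(detection, ground_truth) >= threshold
```

Raising the threshold demands tighter localization, so the same detector's measured accuracy can swing widely with this setting alone, which is why the study treats it as an evaluation variable alongside blur and scale.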
