  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie / Quality analysis of online signatures based on entropy measure

Houmani, Nesma 13 January 2011
This thesis is focused on the quality assessment of online signatures and its application to online signature verification systems. Our work aims at introducing new quality measures that quantify the quality of online signatures and thus establish automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy, widely used in Information Theory. First, we proposed a novel per-person quality measure, called "Personal Entropy", computed on a set of genuine signatures of a given person. The originality of the approach lies in the fact that the entropy of the genuine signature is computed locally, on portions of the signature, based on local density estimation by a Hidden Markov Model. We show that our new measure subsumes the usual criteria of the literature, namely signature complexity, signature variability and signature legibility. Moreover, this measure allows generating, by unsupervised classification, three coherent writer categories in terms of signature variability and complexity. Confronting this measure with the performance of two widely used verification systems (HMM, DTW) on each entropy-based category, we show that performance degrades significantly (by a factor of at least 2) between persons of the "high Entropy" category, containing the most variable and least complex signatures, and those of the "low Entropy" category, containing the most stable and most complex signatures. We then proposed a novel quality measure based on the concept of relative entropy (also called Kullback-Leibler distance), denoted "Personal Relative Entropy", for quantifying a person's vulnerability to attacks (skilled forgeries). This is an original concept, and few studies in the literature are dedicated to this issue. This new measure computes, for a given writer, the Kullback-Leibler distance between the local probability distributions of his or her genuine signatures and those of his or her skilled forgeries: the higher the distance, the better the writer is protected from attacks. We show that this measure simultaneously incorporates in a single quantity the usual criteria proposed in the literature for writer categorization, namely signature complexity and signature variability, as our Personal Entropy does, but also the criterion of vulnerability to skilled forgeries. This measure is better suited to biometric systems because it reflects the compromise between the resulting improvement of the FAR and the corresponding degradation of the FRR. Finally, we proposed a novel quality measure for quantifying the quality of skilled forgeries, which is totally new in the literature. This measure extends our Personal Entropy measure to the framework of skilled forgeries: we exploit the statistical information of the target writer to measure to what extent an impostor's forgery fits the target's probability density function.
In this framework, the quality of a skilled forgery is quantified as the dissimilarity between the target writer's own Personal Entropy and the entropy of the skilled forgery sample. Our experiments show that this measure allows an assessment of the quality of skilled forgeries in the main online signature databases available to the scientific community, and thus provides information about systems' resistance to attacks. Finally, we also demonstrated the interest of using our Personal Entropy measure for improving the performance of online signature verification systems in real applications. We show that the Personal Entropy measure can be used to improve the enrolment process, quantify the degradation of signature quality due to a change of platform, select the best reference signatures, identify outlier signatures, and quantify the relevance of time-function parameters in the context of temporal variability.
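
To make the entropy-based quality idea concrete, the following is a minimal sketch of the two quantities described above: the Shannon entropy of a writer's genuine-signature feature distribution and the Kullback-Leibler distance between genuine and forged distributions. It is not the thesis's implementation: simple histograms stand in for the HMM-based local density estimates, and the feature values are synthetic placeholders.

```python
# Toy sketch (not the thesis's method): the thesis estimates local densities with a
# Hidden Markov Model; here plain histograms stand in for those densities.
import numpy as np
from scipy.stats import entropy  # entropy(p) = Shannon entropy, entropy(p, q) = KL divergence

def density(samples, bins=32, value_range=(-1.0, 1.0)):
    """Histogram estimate of a probability distribution over one local feature."""
    hist, _ = np.histogram(samples, bins=bins, range=value_range)
    return (hist + 1e-9) / (hist.sum() + bins * 1e-9)  # smooth to avoid zero bins

rng = np.random.default_rng(0)
genuine = rng.normal(0.2, 0.1, size=500)    # hypothetical local feature values (e.g. pen speed)
forgeries = rng.normal(0.1, 0.2, size=500)  # hypothetical skilled-forgery values

p_gen = density(genuine)
p_forg = density(forgeries)

personal_entropy = entropy(p_gen)          # analogue of "Personal Entropy": higher = more variable
relative_entropy = entropy(p_gen, p_forg)  # analogue of "Personal Relative Entropy" (KL distance)
print(f"personal entropy ~ {personal_entropy:.3f} nats")
print(f"KL(genuine || forgeries) ~ {relative_entropy:.3f} nats")
```

In the abstract's framing, a low Kullback-Leibler distance between the two distributions flags a writer as more vulnerable to skilled forgeries.
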
212

SPEAKER AND GENDER IDENTIFICATION USING BIOACOUSTIC DATA SETS

Jose, Neenu 01 January 2018
Acoustic analysis of animal vocalizations has been widely used to identify the presence of individual species, classify vocalizations, identify individuals, and determine gender. In this work, automatic identification of the speaker and gender of mice from their ultrasonic vocalizations, and speaker identification of meerkats from their close calls, are investigated. Feature extraction was implemented using Greenwood Function Cepstral Coefficients (GFCC), designed specifically for extracting features from animal vocalizations. Mice ultrasonic vocalizations were analyzed using Gaussian Mixture Models (GMM), which yielded an accuracy of 78.3% for speaker identification and 93.2% for gender identification. Meerkat speaker identification with close calls was implemented using Gaussian Mixture Models (GMM) and Hidden Markov Models (HMM), with accuracies of 90.8% and 94.4% respectively. The results show that these methods can detect gender and identity information in vocalizations and support the possibility of robust gender and individual identification using bioacoustic data sets.
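
As an illustration of the GMM-based speaker identification step, here is a minimal sketch: one Gaussian mixture is trained per speaker, and a test call is assigned to the model with the highest log-likelihood. The GFCC feature extraction is omitted and the feature frames below are synthetic placeholders, so this sketches only the classification stage, not the thesis's full pipeline.

```python
# Minimal GMM speaker-identification sketch (illustrative only; real systems score
# GFCC/MFCC frames extracted from the recordings, which is omitted here).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_dim = 13  # stand-in cepstral feature dimension

# Hypothetical training frames for two speakers (rows = frames, cols = features).
train = {
    "speaker_a": rng.normal(0.0, 1.0, size=(400, n_dim)),
    "speaker_b": rng.normal(0.5, 1.2, size=(400, n_dim)),
}

# One GMM per speaker, trained on that speaker's frames only.
models = {
    name: GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(X)
    for name, X in train.items()
}

def identify(frames):
    """Pick the speaker whose GMM gives the highest average log-likelihood."""
    return max(models, key=lambda name: models[name].score(frames))

test_utterance = rng.normal(0.5, 1.2, size=(120, n_dim))  # frames from an unknown call
print("predicted:", identify(test_utterance))             # expected: speaker_b
```
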
213

The cost-effectiveness of low dose mammography - A decision-analytic approach

Forsblad, Sandra January 2010
With 7,000 new cases in Sweden each year, breast cancer represents 30 percent of all female malignancies and is therefore the most commonly diagnosed cancer among women. There are limitations as to what can be done to prevent the disease, but with the use of mammography screening the chances of finding and treating the disease at an early stage increase. Unfortunately, mammography screening involves radiation, which is an established risk factor for developing breast cancer. However, the newest screening technologies come with a reduced dose, which decreases the risk of developing breast cancer due to the radiation.

The effects of this lower dose compared to that of traditional technologies have not yet been studied, and the purpose of this paper is therefore to assess the cost-effectiveness of this new technology, with a focus on the number of radiation-induced cancers. A cost-utility analysis was performed in which three different mammography technologies (one analogue and two digital) were compared. The total costs and QALYs of breast cancer generated by the use of these three technologies were calculated with a Markov decision-analytic model, in which a cohort of hypothetical 40-year-old women was followed throughout life.

The results of the analysis showed that with the new digital technology (the PC-DR), one in 14,100 screened women develops breast cancer due to radiation, while with the traditional mammography systems (SFM and the CR) this number is one in 3,500 and 4,300 screened women, respectively. Consequently, the number of induced cancers is reduced by up to 75 percent with the use of the PC-DR. Assuming that only the radiation dose differs between the three units, the analysis resulted in an incremental effect of 0.000269 QALYs over a lifetime for the PC-DR compared with SFM (0.000210 QALYs compared with the CR). The PC-DR was also associated with a 33 SEK (26 SEK) lower cost. Thus, if the only difference lies in radiation dose, the PC-DR is the dominant technology to use, since it is both more effective and costs less. However, it is possible that the PC-DR is more expensive per screening occasion than the other technologies, and if so, the PC-DR would no longer be less costly. The study found that the scope for excessive pricing is very small, and under these circumstances the willingness to pay for a QALY has to be considered when deciding which technology to invest in.
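
A quick arithmetic check of the induced-cancer figures quoted above (the rates are taken from the abstract; the snippet only verifies the stated reduction):

```python
# Radiation-induced breast cancer rates per screened woman, as quoted above.
risk_pcdr = 1 / 14_100   # new digital technology (PC-DR)
risk_sfm  = 1 / 3_500    # screen-film mammography (SFM)
risk_cr   = 1 / 4_300    # computed radiography (CR)

print(f"reduction vs SFM: {1 - risk_pcdr / risk_sfm:.1%}")  # ~75%
print(f"reduction vs CR:  {1 - risk_pcdr / risk_cr:.1%}")   # ~70%
```
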
214

Models for Ordered Categorical Pharmacodynamic Data

Zingmark, Per-Henrik January 2005
In drug development, clinical trials are designed to investigate whether a new treatment is safe and has the desired effect on the disease in the target patient population. Categorical endpoints, for example different ranking scales or gradings of adverse events, are commonly used to measure effects in the trials.

Pharmacokinetic/Pharmacodynamic (PK/PD) models are used to describe the plasma concentration of a drug over time and its relationship to the effect studied. The models are used both in drug development and in discussions with drug regulatory authorities. Methods for incorporating ordered categorical data in PK/PD models were studied using a non-linear mixed-effects modelling approach as implemented in the software NONMEM. The traditionally used proportional odds model was applied to the analysis of a 6-grade sedation scale in acute stroke patients and to the analysis of T-cell receptor expression in patients with Multiple Sclerosis, where the results were also compared with an analysis of the data on a continuous scale. Modifications of the proportional odds model were developed to enable analysis of a spontaneously reported side effect and to handle situations where the scale used is heterogeneous or where the drug affects the different scores in the scale in a non-proportional way. The new models were compared with the proportional odds model and were shown to give better predictive performance in the analyzed situations.

The results in this thesis show that categorical data obtained in clinical trials with different designs and different categorical endpoints can successfully be incorporated in PK/PD models. The models developed can also be applied to analyses of other ordered categorical scales than those presented.
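
For readers unfamiliar with the proportional odds model mentioned above, here is a minimal fixed-effects sketch of how cumulative logits map a drug concentration onto probabilities for an ordered scale. The intercepts and slope are invented for illustration; the thesis works with mixed-effects versions estimated in NONMEM.

```python
# Minimal fixed-effects sketch of a proportional odds model for an ordered scale.
import numpy as np

def category_probs(conc, alphas, beta):
    """P(Y = k) for ordered categories 0..K, from cumulative logits alpha_k - beta*conc."""
    logits = np.array(alphas) - beta * conc          # one cumulative logit per threshold
    cum = 1.0 / (1.0 + np.exp(-logits))              # P(Y <= k) for k = 0..K-1
    cum = np.concatenate([cum, [1.0]])               # P(Y <= K) = 1
    return np.diff(np.concatenate([[0.0], cum]))     # successive differences give P(Y = k)

alphas = [-1.0, 0.5, 2.0]   # hypothetical intercepts for a 4-category scale
beta = 0.8                  # hypothetical drug effect per unit concentration

for c in (0.0, 1.0, 3.0):
    print(c, np.round(category_probs(c, alphas, beta), 3))
```

Increasing the concentration shifts probability mass toward the higher categories, and the model assumes the same proportional shift at every category boundary.
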
215

Methodological Studies on Models and Methods for Mixed-Effects Categorical Data Analysis

Kjellsson, Maria C. January 2008
In clinical trials, effects of drugs are often measured on categorical scales. These measurements are increasingly being analyzed using mixed-effects logistic regression. However, experience with such analyses is limited and only a few models are in use. The aim of this thesis was to investigate the performance of, and improve the use of, models and methods for mixed-effects categorical data analysis. The Laplacian method was shown to produce biased parameter estimates if (i) the data variability is large or (ii) the distribution of the responses is skewed. Two solutions are suggested: the Gaussian quadrature method and the back-step method. Two assumptions made with the proportional odds model have also been investigated. The assumption of proportional odds for all categories was shown to be unsuitable for the analysis of data arising from a ranking scale of effects with several underlying causes. An alternative model, the differential odds model, was developed and shown to be an improvement, with regard to statistical significance as well as predictive performance, over the proportional odds model for such data. The appropriateness of the likelihood ratio test was investigated for analyses in which dependence between observations is ignored, i.e. when the analysis is performed with the proportional odds model. The type I error was found to be affected; thus, assessing the actual critical value is prudent in order to verify the statistical significance level. An alternative approach is to use a Markov model, in which dependence between observations is incorporated. In the case of polychotomous data such a model may involve considerable complexity, and a strategy for reducing the time-consuming model building with the Markov model, illustrated with sleep data, is therefore presented. This thesis will hopefully contribute to a more confident use of models for categorical data analysis within the area of pharmacokinetic and pharmacodynamic modelling in the future.
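
One standard way to write the dependence issue discussed above: if observations within a subject are treated as independent, the likelihood is a plain product of category probabilities, whereas a first-order Markov model conditions each observation on the previous one. This is a generic formulation, not necessarily the exact parameterization used in the thesis.

```latex
L_{\text{independent}} = \prod_{j=1}^{n} P\!\left(Y_j = y_j\right),
\qquad
L_{\text{Markov}} = P\!\left(Y_1 = y_1\right)\prod_{j=2}^{n} P\!\left(Y_j = y_j \mid Y_{j-1} = y_{j-1}\right).
```
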
216

Should Hepatitis B Screening Be Added to the United States Immigration Medical Exam? A Cost-utility Model

Beca, Jaclyn 14 December 2010
Hepatitis B virus (HBV) infection is a global leading cause of death as a result of its role in the development of cirrhosis, hepatic decompensation, and hepatocellular carcinoma (HCC). In industrialized nations such as the United States, chronic hepatitis B infection represents a significant and disproportionate disease burden among the foreign-born population. A Markov cohort decision model was developed to determine the cost-effectiveness of HBV screening among new immigrants for the purposes of early detection and treatment, as compared to usual care. The incremental cost-effectiveness ratio for the screening strategy was $45,570 per quality adjusted life year saved. Given the potential for health gains for the immigrant cohort as well as the economic attractiveness of the intervention, some consideration should be given to the addition of a universal HBV screening program to U.S. immigration policy.
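
To illustrate what a Markov cohort cost-utility model of this kind computes, here is a heavily simplified sketch. The states, transition probabilities, costs and utilities are invented placeholders, not values from this thesis; the point is only the mechanics of the cohort trace, discounting, and the incremental cost-effectiveness ratio.

```python
# Simplified Markov cohort cost-utility sketch: screening vs. usual care.
import numpy as np

# Rows/columns: chronic HBV, cirrhosis, death (hypothetical state space; rows sum to 1).
P_usual = np.array([
    [0.94, 0.05, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])
P_screen = np.array([          # screening + earlier treatment -> slower progression (assumed)
    [0.97, 0.02, 0.01],
    [0.00, 0.92, 0.08],
    [0.00, 0.00, 1.00],
])
cost = {"usual":  np.array([500.0, 8000.0, 0.0]),     # USD per yearly cycle, per state
        "screen": np.array([1200.0, 8000.0, 0.0])}    # includes screening and treatment costs
utility = np.array([0.85, 0.60, 0.0])                 # QALY weight per state per cycle

def run(P, c, cycles=40, disc=0.03):
    x = np.array([1.0, 0.0, 0.0])         # whole cohort starts in chronic HBV
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t       # discount factor for cycle t
        total_cost += d * (x @ c)
        total_qaly += d * (x @ utility)
        x = x @ P                         # advance the cohort one cycle
    return total_cost, total_qaly

c_u, q_u = run(P_usual,  cost["usual"])
c_s, q_s = run(P_screen, cost["screen"])
print(f"ICER = {(c_s - c_u) / (q_s - q_u):,.0f} USD per QALY gained")
```

The ICER from such a trace is then compared against a willingness-to-pay threshold, which is how a figure like the $45,570 per QALY reported above would be interpreted.
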
218

Simple And Complex Behavior Learning Using Behavior Hidden Markov Model And Cobart

Seyhan, Seyit Sabri 01 January 2013
In this thesis, behavior learning and generation models are proposed for simple and complex behaviors of robots using unsupervised learning methods. Simple behaviors are modeled by the simple-behavior learning model (SBLM) and complex behaviors are modeled by the complex-behavior learning model (CBLM), which uses previously learned simple or complex behaviors. Both models have common phases named behavior categorization, behavior modeling, and behavior generation. In the categorization phase, sensory data are categorized using a correlation-based adaptive resonance theory network that generates motion primitives corresponding to the robot's base abilities. In the modeling phase, Behavior-HMM, a modified version of the hidden Markov model, is used to model the relationships among the motion primitives in a finite-state stochastic network. In addition, a motion generator, which is an artificial neural network, is trained for each motion primitive to learn the essential robot motor commands. In the generation phase, the desired task is presented as a target observation and the model generates the corresponding motion primitive sequence. These motion primitives are then executed successively by the motion generators that were specifically trained for them. The models are not proposed for one specific behavior, but are intended to serve as bases for all behaviors. CBLM enhances learning capabilities by integrating previously learned behaviors hierarchically; hence, new behaviors can take advantage of already discovered behaviors. The proposed models were tested on a robot simulator, and the experiments showed that the simple- and complex-behavior learning models can generate requested behaviors effectively.
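
As a simplified illustration of the generation phase described above, the sketch below samples a motion-primitive sequence from a first-order Markov chain. The thesis's Behavior-HMM is a modified hidden Markov model, and the primitives and probabilities here are invented for illustration only.

```python
# Sampling a motion-primitive sequence from a first-order Markov chain over primitives.
import numpy as np

primitives = ["turn_left", "go_straight", "turn_right", "stop"]
start = np.array([0.3, 0.5, 0.2, 0.0])   # initial primitive probabilities
trans = np.array([                        # rows: current primitive, columns: next primitive
    [0.2, 0.6, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.0, 0.0, 1.0],                 # "stop" is absorbing
])

def generate(max_len=10, seed=0):
    rng = np.random.default_rng(seed)
    seq = [rng.choice(len(primitives), p=start)]
    while len(seq) < max_len and primitives[seq[-1]] != "stop":
        seq.append(rng.choice(len(primitives), p=trans[seq[-1]]))
    return [primitives[i] for i in seq]

print(generate())  # each primitive would then be executed by its trained motion generator
```
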
220

A Design of Recognition Rate Improving Strategy for Japanese Speech Recognition System

Lin, Cheng-Hung 24 August 2010
This thesis investigates recognition rate improvement strategies for a Japanese speech recognition system. Both training data development and a consonant correction scheme are studied. For training data development, a database of 995 two-syllable Japanese words is established by phonetically balanced sieving. Furthermore, feature models for the 188 common Japanese mono-syllables are derived through a mixed-position training scheme to increase the recognition rate. For consonant correction, a sub-syllable model is developed to enhance consonant recognition accuracy and hence further improve the overall correct rate for whole Japanese phrases. Experimental results indicate that the average correct rate for a Japanese phrase recognition system with 34,000 phrases can be improved from 86.91% to 92.38%.
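
Expressed as error rates, the quoted improvement is substantial; a quick check using the figures from the abstract:

```python
# Relative error reduction implied by the correct rates quoted above.
before, after = 0.8691, 0.9238
err_before, err_after = 1 - before, 1 - after
print(f"error rate: {err_before:.2%} -> {err_after:.2%}")
print(f"relative error reduction: {(err_before - err_after) / err_before:.1%}")  # ~41.8%
```
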
