41. Attitudes Toward Holistic and Mechanical Judgment in Employee Selection: Role of Error Rate and False Positive and False Negative Error
Yankelevich, Maya, 23 April 2010
No description available.
42. Analysis of Time-Based Approach for Detecting Anomalous Network Traffic
Khasgiwala, Jitesh, 19 April 2005
No description available.
43. Classification de menaces d’erreurs par analyse statique, simplification syntaxique et test structurel de programmes / Classification of error threats by static analysis, program slicing and structural testing of programs
Chebaro, Omar, 13 December 2011
Software validation is a crucial part of the development cycle. Two verification and validation techniques have stood out in recent years: static analysis and dynamic analysis. Their strengths and weaknesses are complementary. This thesis presents an original combination of the two. In this combination, static analysis flags the statements that may cause runtime errors by reporting alarms, some of which may be false alarms; dynamic analysis (test generation) is then used to confirm or reject these alarms. The goal of this thesis is to make the search for errors automatic, more precise, and faster. Applied to large programs, test generation may run out of time or memory before confirming certain alarms as real bugs, or before concluding that no execution path can reach the error state of certain alarms and therefore rejecting them. To overcome this problem, we propose to reduce the size of the source code by slicing before running test generation. Slicing transforms a program into a simpler program, called a slice, that is equivalent to the original program with respect to certain criteria. Four usages of slicing are studied. The first usage, named all, applies slicing once, with the simplification criterion being the set of all alarms in the program detected by static analysis. Its drawback is that test generation may run out of time or memory, and the alarms that are easiest to classify are penalized by the analysis of other, more complex alarms. In the second usage, named each, slicing is performed separately with respect to each alarm. However, test generation is then run on every sliced program, and there is a risk of redundant analysis when alarms are included in several slices. To address these drawbacks, we studied the dependencies between alarms and introduced two advanced usages of slicing, named min and smart, that exploit them. In the min usage, slicing is performed with respect to a minimal collection of alarm subsets. These subsets are chosen according to the dependencies between alarms, and their union covers the whole set of alarms. With this usage, there are fewer slices than with each, and simpler slices than with all. However, the dynamic analysis of some slices may still run out of time or memory before classifying certain alarms, even when the dynamic analysis of a simpler slice would classify them. The smart usage applies the previous usage iteratively, reducing the size of the subsets when necessary: whenever an alarm cannot be classified by the dynamic analysis of a slice, simpler slices are computed. We prove the correctness of the proposed method. This work is implemented in sante, our tool connecting the test generation tool PathCrawler with the static analysis platform Frama-C. Experiments showed, first, that our combination outperforms each technique used independently and, second, that verification becomes faster with slicing. Moreover, simplifying the program by slicing makes the detected errors and the remaining alarms easier to analyze.
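The smart usage described above amounts to an iterative refinement loop. A minimal sketch follows; the helpers `slice_program`, `generate_tests`, and `split_subset` are hypothetical stand-ins, not sante's actual API:

```python
# Illustrative sketch of the iterative "smart" slicing strategy described
# above. All helper functions are hypothetical stand-ins, not sante's API.

def classify_alarms(program, alarm_subsets, budget):
    """Try to classify each alarm as a real bug or a false alarm."""
    classified = {}
    worklist = list(alarm_subsets)        # initial subsets from the "min" usage
    while worklist:
        subset = worklist.pop()
        slice_ = slice_program(program, criterion=subset)   # hypothetical
        # Hypothetical: returns a verdict per alarm: "bug", "safe", or "unknown".
        verdicts = generate_tests(slice_, subset, budget)
        for alarm, verdict in verdicts.items():
            if verdict != "unknown":
                classified[alarm] = verdict
        unknown = [a for a in subset if a not in classified]
        if unknown and len(subset) > 1:
            # Reduce the criterion: compute simpler slices for the alarms
            # that dynamic analysis could not classify within the budget.
            worklist.extend(split_subset(unknown))          # hypothetical
    return classified
```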
44. Development of a novel electron-transfer secondary reaction matrix, characterization of the site-specificity of novel bilin-lyase, and Fundulus grandis protein expression investigation using mass spectrometry
Boutaghou, Mohamed N., 17 December 2011
Reported in this dissertation are the results of investigations performed at the New Orleans Center for Mass Spectrometry at the University of New Orleans. The projects detailed in the coming pages take on a variety of subjects, but a common thread is that each employs matrix-assisted laser desorption/ionization (MALDI) mass spectrometry to solve a problem. Fundamental aspects of MALDI in-plume ionization inform the introduction of a newly developed electron-transfer secondary ionization matrix. The remaining projects relate to the ever-expanding field of proteomics. Mass spectrometry was used to investigate the site specificity of a newly developed bilin-lyase enzyme, a new approach was developed to distinguish between A-ring and D-ring attachment of bilins, and F. grandis protein expression patterns were investigated in several tissues. All results were acquired using a MALDI TOF/TOF mass spectrometer. The sensitivity, mass accuracy, mass resolution, and ability to perform collision-induced decomposition (CID) experiments were all valuable features that raised the quality of the data and thereby sharpened the inferences drawn for the different projects.
45. Identifying exoplanets and unmasking false positives with NGTS
Günther, Maximilian Norbert, January 2018
In my PhD, I advanced the scientific exploration of the Next Generation Transit Survey (NGTS), a ground-based wide-field survey operating at ESO’s Paranal Observatory in Chile since 2016. My original contribution to knowledge is the development of novel methods to 1) estimate NGTS’ yield of planets and false positives; 2) disentangle planets from false positives; and 3) accurately characterise planets. If an exoplanet passes (transits) in front of its host star, we can measure a periodic decrease in brightness. The study of transiting exoplanets gives insight into their size, formation, bulk composition and atmospheric properties. Transit surveys are limited by their ability to identify false positives, which can mimic planets and outnumber them a hundredfold. First, I designed a novel yield simulator to optimise NGTS’ observing strategy and identification of false positives (published in Günther et al., 2017a). This showed that NGTS’ prime targets, Neptune- and Earth-sized signals, are frequently mimicked by blended eclipsing binaries, allowing me to quantify and prepare strategies for candidate vetting and follow-up. Second, I developed a centroiding algorithm for NGTS, achieving a precision of 0.25 milli-pixel in a CCD image (published in Günther et al., 2017b). With this, one can measure a shift of light during an eclipse, readily identifying unresolved blended objects. Third, I built a joint Bayesian fitting framework for photometry, centroids, and radial velocity cross-correlation function profiles. This allows one to disentangle which object (target or blend) is causing the signal and to characterise the system. My method has already unmasked numerous false positives. Most importantly, I confirmed that a signal that had almost been erroneously rejected is in fact an exoplanet (published in Günther et al., 2018). The presented achievements reduce the contamination of NGTS candidates by blended false positives by 80%, and show a new approach for unmasking hidden exoplanets. This research enhanced the success of NGTS, and can provide guidance for future missions.
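For context on the transit signals above: a dark planet of radius R_p crossing a star of radius R_⋆ blocks a fraction of the stellar disc, so, ignoring limb darkening, the fractional flux drop is approximately

\[
\delta \approx \left(\frac{R_p}{R_\star}\right)^{2}.
\]

For an Earth-sized planet in front of a Sun-like star this is only about 84 parts per million, which is why blended eclipsing binaries, whose intrinsically deep eclipses are diluted by the light of the blend, can mimic such shallow signals.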
46. Robust estimation for spatial models and the skill test for disease diagnosis
Lin, Shu-Chuan, 25 August 2008
This thesis focuses on (1) the statistical methodologies for the estimation of spatial data with outliers and (2) classification accuracy of disease diagnosis.
Chapter I, Robust Estimation for Spatial Markov Random Field Models: Markov Random Field (MRF) models are useful in analyzing spatial lattice data collected from semiconductor device fabrication and printed circuit board manufacturing processes or agricultural field trials. When outliers are present in the data, classical parameter estimation techniques (e.g., least squares) can be inefficient and potentially mislead the analyst. This chapter extends the MRF model to accommodate outliers and proposes robust parameter estimation methods such as the robust M- and RA-estimates. Asymptotic distributions of the estimates with differentiable and non-differentiable robustifying functions are derived. Extensive simulation studies explore the robustness properties of the proposed methods under various amounts of outliers in different patterns. Analyses of grid data with and without edge information are also provided. Three data sets taken from the literature illustrate the advantages of the methods.
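As a generic illustration of the robust M-estimation idea mentioned above (a standard Huber-type form, not necessarily the thesis's exact M- or RA-estimator), the squared-error loss of least squares is replaced by a bounded-influence loss ρ:

\[
\hat{\theta} = \arg\min_{\theta} \sum_{i} \rho\!\left(\frac{r_i(\theta)}{\sigma}\right),
\qquad
\rho_{c}(u) =
\begin{cases}
\tfrac{1}{2}u^{2}, & |u| \le c,\\
c\,|u| - \tfrac{1}{2}c^{2}, & |u| > c,
\end{cases}
\]

where the r_i(θ) are residuals and c is a tuning constant; residuals beyond c contribute only linearly, limiting the influence of outlying lattice sites.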
Chapter II, Extending the Skill Test for Disease Diagnosis: For diagnostic tests, we present an extension to the skill plot introduced by Mozer and Briggs (2003). The method is motivated by a study of diagnostic measures for osteoporosis. By restricting the area under the ROC curve (AUC) according to the skill statistic, we obtain a diagnostic test better suited to practical applications because it accounts for misclassification costs. We also construct relationships between the diseased and healthy groups, using the Koziol-Green model and the mean-shift model, to improve the skill statistic. Asymptotic properties of the skill statistic are provided. Simulation studies compare the theoretical results and the estimates under various disease rates and misclassification costs. We apply the proposed method to the classification of osteoporosis data.
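Although the exact skill statistic of Mozer and Briggs is not reproduced here, the cost-sensitive idea can be illustrated generically: with disease prevalence π and unit costs c_FP and c_FN, a diagnostic threshold is judged by its expected misclassification cost

\[
C = c_{FP}\,(1-\pi)\,\mathrm{FPR} + c_{FN}\,\pi\,\mathrm{FNR},
\]

and a test has skill only where it beats the best trivial rule (classify everyone as healthy, or everyone as diseased) under these costs; restricting the ROC curve, and hence the AUC, to that region is the spirit of the approach described above.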
47. Implementace a rozšíření frameworku pro testování technické dokumentace / Implementation and Extension of the Technical Documentation Testing Framework
Macko, Peter, January 2020
This thesis deals with automating the testing of technical documentation written in the AsciiDoc markup language using Emender, an open-source framework for testing technical documentation, implemented on a CI/CD platform. The framework was extended with the emenderwebservice web application and its REST API, which provides a graphical user interface for test results and a mechanism for waiving false positive test results. The web application was built with Flask, a WSGI web application framework, backed by a database that allows test results to be aggregated and uniquely identified. The application simplifies access to the test results generated by the Emender framework in CI/CD systems and gives technical writers a coherent user environment.
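A minimal sketch of such a false-positive waiver endpoint is shown below; the routes, fields, and in-memory storage are hypothetical illustrations, not emenderwebservice's actual REST API:

```python
# Minimal Flask sketch of a test-result waiver endpoint.
# Routes, fields, and storage are hypothetical, not the real emenderwebservice API.
from flask import Flask, jsonify, request

app = Flask(__name__)
waivers = {}  # a real service would persist these in its database

@app.route("/results/<result_id>/waive", methods=["POST"])
def waive_result(result_id):
    """Mark a failing test result as a known false positive."""
    reason = request.get_json(force=True).get("reason", "")
    waivers[result_id] = reason
    return jsonify({"id": result_id, "waived": True, "reason": reason})

@app.route("/results/<result_id>", methods=["GET"])
def get_result(result_id):
    """Report whether a test result has been waived."""
    return jsonify({"id": result_id, "waived": result_id in waivers})

if __name__ == "__main__":
    app.run()
```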
48. A performance measurement of a Speaker Verification system based on a variance in data collection for Gaussian Mixture Model and Universal Background Model
Bekli, Zeid; Ouda, William, January 2018
Voice recognition has become a more focused and researched field in the last century, and new techniques to identify speech have been introduced. A part of voice recognition is speaker verification, which is divided into a front-end and a back-end. The first component, the front-end or feature extraction, uses techniques such as Mel-Frequency Cepstrum Coefficients (MFCC) to extract the speaker-specific features of a speech signal; MFCC is mostly used because it is based on the known variation of the human ear's critical frequency bandwidth. The second component, the back-end, handles the speaker modeling. The back-end is based on the Gaussian Mixture Model (GMM) and Gaussian Mixture Model-Universal Background Model (GMM-UBM) methods for enrollment and verification of the specific speaker. In addition, normalization techniques such as Cepstral Mean Subtraction (CMS) and feature warping are used for robustness against noise and distortion. In this paper, we build a speaker verification system, experiment with varying amounts of training data for the true speaker model, and evaluate the system performance. To further investigate the area of security in a speaker verification system, the two methods (GMM and GMM-UBM) are compared to determine which is more secure depending on the amount of training data available. This research therefore contributes an answer to how much data is really necessary for a secure system where the false positive rate is as close to zero as possible, how the amount of training data affects the false negative (FN) rate, and how this differs between GMM and GMM-UBM. The results show that an increase in speaker-specific training data increases the performance of the system. However, too much training data proved unnecessary, because performance eventually reaches its highest point, in this case at around 48 minutes of data; the results also show that the GMM-UBM models trained on 48 to 60 minutes of data outperformed the GMM models.
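A rough sketch of the back-end scoring just described is given below, using scikit-learn's GaussianMixture as a stand-in; note that real GMM-UBM systems derive the speaker model by MAP adaptation of the UBM, whereas here both models are trained independently for simplicity:

```python
# Sketch of GMM-UBM style verification scoring with scikit-learn.
# Real GMM-UBM systems MAP-adapt the speaker model from the UBM;
# here both models are trained independently for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(speaker_mfcc, background_mfcc, n_components=64):
    # UBM: trained on pooled MFCC frames from many background speakers.
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(background_mfcc)
    # Speaker model: trained on the claimed speaker's enrollment frames.
    spk = GaussianMixture(n_components=n_components, covariance_type="diag")
    spk.fit(speaker_mfcc)
    return spk, ubm

def verify(utterance_mfcc, spk, ubm, threshold=0.0):
    # Log-likelihood ratio per frame: positive means "more like the speaker".
    llr = spk.score(utterance_mfcc) - ubm.score(utterance_mfcc)
    return llr > threshold, llr
```

Raising the decision threshold trades false positives for false negatives, which is exactly the trade-off the thesis measures as a function of training-data volume.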
49. Characteristics Associated with Neonatal Carnitine Levels: A Systematic Review & Clinical Database Analysis
Sutherland, Sarah C., 28 January 2013
Newborn screening programs measure analyte levels in neonatal blood spots to identify individuals at high risk of disease. Carnitine and acylcarnitine levels are primary markers used in the detection of fatty acid oxidation disorders. These analytes may be influenced by certain pre/perinatal or newborn screening related factors. The primary objective of this study was to explore the association between these characteristics and levels of blood carnitines and acylcarnitines in the newborn population. The study was composed of two parts: a systematic review and a clinical database analysis of existing newborn screening data. The systematic review results suggested considerable variability across studies in the presence and directionality of associations between analyte levels and birth weight, gestational age, age at time of blood spot collection, type of sample, and storage time. Sex was not significantly associated with carnitine or acylcarnitine levels in neonatal blood. We identified a need to more fully investigate a potential interaction between gestational age and birth weight with regard to analyte levels. The secondary data analyses indicated a statistically significant relationship between analyte levels and all perinatal/infant and newborn screening related factors of interest, but effect sizes were generally small. The interaction between gestational age and birth weight was significant in all models; when further explored through graphical analysis with conditional means, extremely premature neonates stood out as having distinct analyte patterns in relation to birth weight. Variation in the ratio of total acylcarnitine to free carnitine was better accounted for by the perinatal and newborn factors than was variation in any individual carnitine or acylcarnitine, indicating that proportions of carnitine and acylcarnitines may be more important in understanding an individual's metabolic functioning than individual analyte levels. A low proportion of variation was explained in all multivariate models, supporting the use of universal algorithms in newborn screening and suggesting the need for further large-scale empirical research targeted at previously unaccounted-for perinatal factors such as birth stress.
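As a hedged sketch of the kind of interaction model described above (column and file names are hypothetical illustrations, not the study's actual analysis code):

```python
# Sketch of a regression with a gestational-age x birth-weight interaction,
# of the kind described above. Dataset and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("newborn_screening.csv")   # hypothetical dataset

# The "*" expands to both main effects plus the interaction term.
model = smf.ols(
    "free_carnitine ~ gest_age * birth_weight + sex + age_at_collection",
    data=df,
).fit()
print(model.summary())  # look for the gest_age:birth_weight coefficient
```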