1

Improving semen identification and quantitation using protein mass spectrometry

Niles, Sydney 17 June 2019 (has links)
Studies have highlighted a growing national problem regarding the number of untested Sexual Assault Kits (SAKs). A 2011 National Institute of Justice report revealed Los Angeles alone had 10,000 untested SAKs. This backlog has fueled the need for specific and efficient testing of SAK evidence. In traditional workflows, serology tests are used to indicate the presence of a targeted bodily fluid and prioritize samples for genetic analysis. However, given the lack of sensitivity and specificity of modern serological assays, current SAK workflows often skip serological identification altogether for a “direct to DNA” approach. While these Y-Screen workflows achieve rapid screening of samples for the presence of a detectable male contributor, they do not provide any serological information. As a result, samples lack what can be critical investigative context. Improved serological capabilities with enhanced sensitivity and specificity would provide greater confidence in results for the confirmatory identification of seminal fluid. At a minimum, forensic biologists should understand the limitations associated with traditional serological approaches to seminal fluid identification when processing SAK samples. Current serological techniques based on antigen-antibody binding have exhibited both sensitivity and specificity limitations. False positive results for semen can be obtained from non-target biological fluids such as breast milk, urine, and vaginal fluid, or from non-specific binding events. This study evaluates a promising emerging technique that combines high-specificity protein biomarker detection with targeted mass spectrometry. This research used human-specific peptide markers for seminal fluid proteins, together with peptide standards, to quantify seminal fluid peptide targets on an Agilent 6495 mass spectrometer coupled to a 1290 series liquid chromatograph. This approach has been shown to be both more specific and more sensitive in identifying a bodily fluid than current immunologically based approaches. Thus, this proteomic workflow was used to evaluate authentic false positive rates of current immunochromatographic techniques for seminal fluid identification. Self-collected vaginal swabs from participants not engaging in barrier-free vaginal intercourse with male partners were tested using immunochromatographic assays designed to detect either semenogelin (Sg) (RSID™-Semen) or prostate specific antigen (PSA) (ABAcard® p30 Test and SERATEC® PSA Semiquant). Similarly, three seminal fluid biomarkers (semenogelin 1, semenogelin 2, and prostate specific antigen) were used for seminal fluid identification via mass spectrometry. Any sample producing a positive result on any immunochromatographic assay was evaluated to determine whether the target protein was actually present at levels above the reported sensitivity limits of the lateral flow tests. Additionally, Sperm HY-LITER™ Express was used to microscopically confirm the absence of spermatozoa in all samples producing positive immunochromatographic results. In addition to using the quantitative proteomic assay to estimate the rate of authentic false positive results associated with lateral flow assays, this research sought to establish the correlation (or lack thereof) between absolute quantitation of seminal fluid markers and the ability to successfully generate DNA profiles.
Self-collected post-coital swabs were obtained from donors engaging in barrier-free vaginal intercourse with male partners at intervals of 1-8 days after intercourse. All samples were analyzed using the quantitative seminal fluid protein mass spectrometry assay, once again targeting Sg1, Sg2, and PSA. Both autosomal STR profiles (GlobalFiler™) and Y-STR profiles (Yfiler™ Plus) were subsequently generated. With regard to immunochromatographic assay false positive rates, a total of 17 false positives for semen were observed (n=150), 14 of which were consistent with PSA and 3 with Sg, for corresponding false positive rates of 9.3% and 2%, respectively (11.3% overall). These samples were all confirmed to be sperm-negative by mass spectrometry and microscopic analysis. These data support the use of current immunochromatographic assays for the presumptive detection of seminal fluid while also providing further support for the improved specificity of alternative serological approaches using mass spectrometric identification of biological targets. With regard to the relationship between quantitative levels of target seminal fluid peptides and the ability to generate STR profiles from vaginal swabs collected at various post-coital intervals, a total of 61 post-coital samples were tested. Of these, 48 samples had a seminal fluid target greater than the limit of quantitation for the mass spectrometry assay, and 26 produced an STR (n=9) and/or Y-STR (n=10) profile. A correlation between peptide quantitation and the ability to generate a genetic profile could not be established from this initial sample set. Overall, however, the work demonstrates that the use of proteomic mass spectrometry for the identification of seminal fluid targets (with its enhanced sensitivity and specificity) would enable forensic practitioners to make better use of serological information during the analysis of challenging sexual assault samples.
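As a quick arithmetic check, the reported rates follow directly from the counts quoted above; a minimal sketch using only the figures in this abstract:

```python
# False-positive rates from the reported counts (n = 150 vaginal swabs).
n_samples = 150
fp_psa = 14   # false positives consistent with PSA
fp_sg = 3     # false positives consistent with semenogelin (Sg)

rate_psa = fp_psa / n_samples                 # ~0.093 -> 9.3%
rate_sg = fp_sg / n_samples                   # 0.020 -> 2%
rate_overall = (fp_psa + fp_sg) / n_samples   # ~0.113 -> 11.3%

print(f"PSA FP rate:     {rate_psa:.1%}")
print(f"Sg FP rate:      {rate_sg:.1%}")
print(f"Overall FP rate: {rate_overall:.1%}")
```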
2

Welcoming Quality in Non-Significance and Replication Work, but Moving Beyond the p-Value: Announcing New Editorial Policies for Quantitative Research in JOAA

McBee, Matthew T., Matthews, Michael S. 01 May 2014 (has links)
The self-correcting nature of psychological and educational science has been seriously questioned. Recent special issues of Perspectives on Psychological Science and Psychology of Aesthetics, Creativity, and the Arts have roundly condemned current organizational models of research and dissemination and have criticized the perverse incentive structure that tempts researchers into generating and publishing false positive findings. At the same time, replications are rarely attempted, allowing untruths to persist in the literature unchallenged. In this article, the editors of the Journal of Advanced Academics consider this situation and announce new policies for quantitative submissions. They are (a) an explicit call for replication studies; (b) new instructions directing reviewers to base their evaluation of a study’s merit on the quality of the research design, execution, and written description, rather than on the statistical significance of its results; and (c) an invitation to omit statistical hypothesis tests in favor of reporting effect sizes and their confidence limits.
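Policy (c) asks authors to report effect sizes with confidence limits rather than hypothesis tests; a minimal sketch of what such reporting looks like, using hypothetical data and the common normal-approximation standard error for Cohen's d:

```python
import math

# Hypothetical scores for two groups; the point is the reporting style,
# not the data: an effect size with confidence limits, no p-value.
group_a = [12.1, 14.3, 13.8, 15.2, 12.9, 14.7, 13.5, 15.0]
group_b = [11.0, 12.4, 11.8, 13.1, 12.2, 11.5, 12.8, 12.0]

def cohens_d(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

d = cohens_d(group_a, group_b)
na, nb = len(group_a), len(group_b)
# Approximate standard error of d and a 95% normal-theory interval.
se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```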
3

Alvaro Uribe Velez: Maintaining Popularity Despite Significant Government Scandals

Canas Baena, Juliana A 01 January 2016 (has links)
Despite the scandals and the increase in violence toward vulnerable communities, Uribe and his government maintained an extremely high approval rating. His popularity may be explained by the fact that a majority of citizens benefited from his policies: although those policies violated human rights, they functioned as mechanisms that delivered stability to Colombia's middle and upper classes. Moreover, Uribe did not engage with critics of his government or with the media; instead, he constructed a discourse in which his government and its policies were responsible for successfully combating the guerrillas and cartels and improving the economy. Thus, many Colombians may have chosen to continue supporting him because they saw his clandestine government tactics as necessary.
4

False and True Positives in Arthropod Thermal Adaptation Candidate Gene Lists

Herrmann, Maike, Yampolsky, Lev Y. 01 June 2021 (has links)
Genome-wide studies are prone to false positives due to inherently low priors and statistical power. One approach to ameliorate this problem is to seek validation of reported candidate genes across independent studies: genes with repeatedly discovered effects are less likely to be false positives. Conversely, genes reported only as many times as expected by chance alone, while possibly representing novel discoveries, are also more likely to be false positives. We show that, across over 30 genome-wide studies that reported Drosophila and Daphnia genes with possible roles in thermal adaptation, the combined lists of candidate genes and orthologous groups are rapidly approaching the total number of genes and orthologous groups in the respective genomes. This is consistent with the expectation of a high frequency of false positives. The majority of these spurious candidates have been identified by one or a few studies, as expected by chance alone. In contrast, a noticeable minority of genes have been identified by numerous studies, with the probability of such repeated discoveries occurring by chance alone being exceedingly small. For this subset of genes, different studies agree with each other despite differences in ecological settings, genomic tools, methodology, and reporting thresholds. We provide a reference set of presumed true positives among Drosophila candidate genes and orthologous groups involved in the response to changes in temperature, suitable for cross-validation purposes. Despite this approach being prone to false negatives, the list of presumed true positives includes several hundred genes, consistent with the “omnigenic” concept of the genetic architecture of complex traits.
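The chance-overlap argument can be made concrete with a simple null model; a minimal sketch with hypothetical counts (not the paper's actual study or genome sizes): if each study reports an effectively random subset of genes, the number of studies recovering any one gene is binomial, so recovery by many studies is vanishingly unlikely under the null:

```python
from math import comb

# Hypothetical illustration: 30 independent studies, each reporting
# ~500 candidate genes from a genome of ~14,000 genes.
n_studies = 30
p_hit = 500 / 14000   # chance a given gene lands in one study's list

def prob_at_least(k, n, p):
    """P(a given gene is reported by >= k of n studies) under the null."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

for k in (1, 3, 6, 10):
    print(f"P(reported by >= {k:2d} studies by chance) = "
          f"{prob_at_least(k, n_studies, p_hit):.2e}")
```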
5

Using Machine Learning Techniques to Improve Static Code Analysis Tools Usefulness

Alikhashashneh, Enas A. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as far as possible, the cost of manually inspecting the false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of a particular SCA tool nor depends on the specific programming language of the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a well-known synthetic source code suite, the Juliet test suite. From this evaluation, we concluded that SCA tools report plenty of false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate true positive, false positive, or false negative warnings. These datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest. Lastly, using this classifier and an instance-based transfer learning technique, we ranked warnings aggregated from various open-source software projects. The experimental results show that the proposed approach outperformed a random ranking algorithm and correlated highly with the ranked list generated by the optimal ranking algorithm.
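A minimal sketch of the warning-classification step, with hypothetical features and labels standing in for the dissertation's actual dataset schema, using a scikit-learn Random Forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data: rows are SCA warnings described by numeric features
# (e.g., severity, code-complexity metrics); 1 = true positive warning,
# 0 = false positive warning. Real features would be extracted per tool.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```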
6

Improving the precision of an Intrusion Detection System using Indicators of Compromise : - a proof of concept -

Lejonqvist, Gisela, Larsson, Oskar January 2018 (has links)
The goal of this research is to improve an IDS so that the percentage of true positives is high; an organisation can then cut time and cost and use its resources more optimally. Specifically, the research set out to prove that the precision of an intrusion detection system (IDS), in terms of producing a lower rate of false positives or a higher rate of true alerts, can be improved by parsing indicators of compromise (IOC) to gather information that, combined with system-specific knowledge, forms a solid basis for manual fine-tuning of IDS rules. The methodology used is the Design Science Research Methodology (DSRM), because it is intended for research that aims to answer an existing problem with a new or improved solution. Part of that solution is a proposed process for tuning an arbitrary intrusion detection system. The implemented and formalized process, Tuned Intrusion Detection System (TIDS), was designed during this research work, aiding us in presenting and performing validation tests in a structured and robust way. The testbed consisted of a Windows 10 operating system with a NIDS implementation of Snort as the IDS. The work was experimental, with IDS rules and tools evaluated and improved over several iterations. Using recorded data traffic from the public dataset CTU-13, the difference between tuned and un-tuned rules in the IDS was presented in terms of the precision of the alerts created by the IDS. Our first contribution is that the concept holds: precision can be improved by adding custom rules based on known parameters of the network and features of the network traffic, and by disabling rules that are out of scope (a sketch of such IOC-driven rule generation follows below). The second contribution is the TIDS process itself, as designed during the thesis work, which served us well throughout.
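A minimal sketch of the kind of IOC-driven custom rule the tuning process produces (the IOC addresses and SIDs are hypothetical; real rules would come from parsed IOC feeds plus knowledge of the local network):

```python
# Generate custom Snort alert rules from a list of IOC IP addresses.
iocs = ["203.0.113.7", "198.51.100.23"]   # hypothetical C2 addresses

SID_BASE = 1000001   # local rules conventionally use SIDs >= 1,000,000

rules = [
    f'alert ip {ip} any -> $HOME_NET any '
    f'(msg:"Traffic from known IOC {ip}"; sid:{SID_BASE + i}; rev:1;)'
    for i, ip in enumerate(iocs)
]

# Append to the local rules file that Snort includes at startup.
with open("local.rules", "a") as fh:
    fh.write("\n".join(rules) + "\n")
print("\n".join(rules))
```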
7

Analysis of Security Findings and Reduction of False Positives through Large Language Models

Wagner, Jonas 18 October 2024 (has links)
This thesis investigates the integration of State-of-the-Art (SOTA) Large Language Models (LLMs) into the process of reassessing security findings generated by Static Application Security Testing (SAST) tools. The primary objective is to determine whether LLMs are able to detect false positives (FPs) while maintaining a high true positive (TP) rate, thereby enhancing the efficiency and effectiveness of security assessments. Four consecutive experiments were conducted, each addressing specific research questions. The initial experiment, using a dataset of security findings extracted from the OWASP Benchmark, identified the optimal combination of context items provided by the SAST tool SpotBugs, which, when used with GPT-3.5 Turbo, reduced FPs while minimizing the loss of TPs. The second experiment, conducted on the same dataset, demonstrated that advanced prompting techniques, particularly few-shot Chain-of-Thought (CoT) prompting combined with Self-Consistency (SC), further improved the reassessment process. The third experiment compared proprietary and open-source LLMs on an OWASP Benchmark dataset about one-fourth the size of the previously used dataset. GPT-4o achieved the highest performance, detecting 80 out of 128 FPs without missing any TPs, resulting in a perfect TPR of 100% and a decrease in FPR of 41.27 percentage points. Meanwhile, Llama 3.1 70B detected 112 out of the 128 FPs but missed 10 TPs, resulting in a TPR of 94.94% and a reduction in FPR of 56.62 percentage points. To validate these findings in a real-world context, the approach was applied to a dataset generated from the open-source project Mnestix using multiple SAST tools. GPT-4o again emerged as the top performer, detecting 26 out of 68 FPs while missing only one TP, resulting in a TPR lower by 2.22 percentage points but an FPR lower by 37.57 percentage points.
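A minimal sketch of the few-shot CoT plus Self-Consistency reassessment loop (the prompt wording and labels are illustrative, not the thesis's actual prompts; `llm` is any callable that sends a prompt to a sampled-decoding model and returns text):

```python
from collections import Counter

def reassess(finding: str, llm, n_samples: int = 5) -> str:
    """Self-consistency: sample several chain-of-thought verdicts for the
    same finding and keep the majority vote."""
    prompt = (
        "You are reviewing a static-analysis security finding.\n"
        "Think step by step, then end with TRUE_POSITIVE or FALSE_POSITIVE.\n\n"
        f"Finding:\n{finding}\n"
    )
    verdicts = []
    for _ in range(n_samples):
        answer = llm(prompt)   # must be sampled with temperature > 0
        verdicts.append("FALSE_POSITIVE" if "FALSE_POSITIVE" in answer
                        else "TRUE_POSITIVE")
    return Counter(verdicts).most_common(1)[0][0]
```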
8

The characterization of transiting exoplanets by radial velocimetry

Santerne, Alexandre 26 October 2012 (has links)
The search for and characterization of transiting extrasolar planets (i.e., planets that pass in front of their host star as seen from Earth) is an important domain of planetology, because these planets constrain the formation, evolution, and migration processes of planetary systems. In recent years, the CoRoT (CNES) and Kepler (NASA) space missions have discovered several thousand transiting-planet candidates. However, these candidates must be confirmed in order to exclude any false positive scenario that can mimic a planetary transit. One method is to perform radial velocity follow-up observations, which measure the transiting object's mass and orbital parameters and thereby determine the nature of the candidates. During my PhD thesis, I worked to resolve the nature of transiting-planet candidates from the CoRoT and Kepler space missions by performing follow-up observations with the SOPHIE (OHP) and HARPS (ESO) spectrographs, which were used to discover several new transiting extrasolar planets. I also measured the Kepler false-positive rate, equal to 35% for giant close-in exoplanet candidates, contradicting previous, much more optimistic estimations. In addition, I participated in the development of a new software package, "PASTIS", whose objective is to statistically validate low-mass transiting exoplanets that are out of reach for current spectrographs. In the near future, this tool will validate tens of low-mass planets from the CoRoT and Kepler space missions.
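The mass measurement rests on the standard radial-velocity semi-amplitude relation (a textbook result, not specific to this thesis): for a planet of mass $M_p$ on an orbit of period $P$, eccentricity $e$, and inclination $i$ around a star of mass $M_\star$,

$$K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p \sin i}{\left(M_\star + M_p\right)^{2/3}} \frac{1}{\sqrt{1 - e^2}},$$

so the spectroscopic orbit yields $M_p \sin i$; for a transiting planet $i \approx 90°$, and the true planetary mass follows.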
9

Detection of Mass Regions by Bilateral Analysis Adapted to Breast Density Using Similarity Indexes and Convolutional Neural Networks

Diniz, João Otávio Bandeira 03 February 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Breast cancer is the type of cancer that most affects women and is one of the leading causes of death worldwide. Aiming to aid the detection and diagnosis of this pathology, several image-analysis techniques have been created to serve as a second opinion. Mammograms of the left and right breasts are known to present a high degree of symmetry, and a sudden difference between the pair can be considered suspicious. The breast can also present different tissue densities, which can make the detection and diagnosis of lesions more difficult. Thus, the objective of this work is to develop an automatic methodology for detecting mass regions in pairs of digitized mammograms, adapted to breast density, using image processing and similarity-index techniques to determine asymmetric regions in the breasts, together with convolutional neural networks to classify breast density and to classify regions as mass or non-mass. The proposed methodology is divided into two phases: a training phase and a test phase. In the training phase, three models are created using convolutional neural networks: the first classifies the breast by density, and the other two classify regions as mass or non-mass in dense and non-dense breasts, respectively. In the test phase, mammography images from the DDSM database pass through several stages to segment asymmetric regions, which are subsequently classified. The steps consist of aligning the breasts so that the pair can be compared; from this comparison, asymmetric regions are segmented, and these regions then undergo a false positive reduction process to eliminate regions that are not masses. Before the remaining regions are classified, the breasts pass through density classification by the model obtained in the training phase. Finally, for each breast type, a model classifies the segmented regions as mass or non-mass. The methodology presented excellent results: in non-dense breasts it reached a sensitivity of 91.56%, a specificity of 90.73%, an accuracy of 91.04%, and a rate of 0.058 false positives per image; in dense breasts it showed 90.36% sensitivity, 96.35% specificity, 94.84% accuracy, and 0.027 false positives per image. The results show that the methodology is promising and can be used to compose a CAD system, serving as a second opinion for the specialist in the task of detecting mass regions.
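A minimal sketch of how the reported screening metrics relate to a confusion matrix (the counts here are hypothetical placeholders, not the thesis's data):

```python
# Hypothetical confusion-matrix counts for a mass / non-mass classifier.
tp, fn = 152, 14    # mass regions detected / missed
tn, fp = 489, 50    # non-mass regions rejected / wrongly flagged
n_images = 860      # mammograms processed

sensitivity = tp / (tp + fn)                 # recall on true masses
specificity = tn / (tn + fp)                 # rejection of non-masses
accuracy = (tp + tn) / (tp + tn + fp + fn)
fp_per_image = fp / n_images                 # per-image FP rate as above

print(f"sensitivity={sensitivity:.2%}  specificity={specificity:.2%}  "
      f"accuracy={accuracy:.2%}  FP/image={fp_per_image:.3f}")
```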
10

USING MACHINE LEARNING TECHNIQUES TO IMPROVE STATIC CODE ANALYSIS TOOLS USEFULNESS

Enas Ahmad Alikhashashneh (7013450) 16 October 2019 (has links)
This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as far as possible, the cost of manually inspecting the false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of a particular SCA tool nor depends on the specific programming language of the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a well-known synthetic source code suite, the Juliet test suite. From this evaluation, we concluded that SCA tools report plenty of false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate true positive, false positive, or false negative warnings. These datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest. Lastly, using this classifier and an instance-based transfer learning technique, we ranked warnings aggregated from various open-source software projects. The experimental results show that the proposed approach outperformed a random ranking algorithm and correlated highly with the ranked list generated by the optimal ranking algorithm.
