1

Síťový tester / Network tester

Haško, Juraj January 2019 (has links)
The thesis deals with data network testing. Its aim is to design a methodology for comprehensive measurement of network transmission parameters, to design the concept of a tester, and to realise it by extending the existing JMeter program.
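As an illustration of the kind of transmission-parameter measurement such a methodology covers (not the thesis's JMeter extension itself), the following minimal Python sketch sends a payload to an assumed TCP echo service and derives a rough throughput figure; the host, port, and payload size are hypothetical.

    # Illustrative sketch only: sends a payload to an assumed TCP echo service and
    # derives a rough throughput figure. Host, port, and payload size are hypothetical.
    import socket
    import time

    HOST, PORT = "192.0.2.10", 7          # assumed echo server (TEST-NET address)
    PAYLOAD = b"x" * 64 * 1024            # 64 KiB probe payload

    def measure_once(host=HOST, port=PORT, payload=PAYLOAD):
        with socket.create_connection((host, port), timeout=5) as sock:
            start = time.perf_counter()
            sock.sendall(payload)
            received = 0
            while received < len(payload):        # echo service returns the payload
                chunk = sock.recv(65536)
                if not chunk:
                    break
                received += len(chunk)
            elapsed = time.perf_counter() - start
        transferred_bits = 2 * received * 8       # sent + echoed back
        return elapsed, transferred_bits / (elapsed * 1e6)

    if __name__ == "__main__":
        seconds, mbps = measure_once()
        print(f"echo of 64 KiB took {seconds * 1000:.1f} ms, approx. {mbps:.1f} Mbit/s")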
2

From pesticide degradation products to legacy toxicants and emerging contaminants : novel analytical methods, approaches, and modeling

Forsberg, Norman D. 03 April 2014 (has links)
Environmental toxicologists and public health officials are responsible for assisting in the identification, management, and mitigation of public health hazards. As a result, there is a continued need for robust analytical tools that can aid in the rapid quantification and characterization of chemical exposure. In the first research phase, we demonstrated that a current tool for estimating human organophosphate pesticide exposure, measuring dialkyl phosphate (DAP) metabolites in urine as chemical biomarkers of pesticide exposure, could represent exposure to DAPs themselves and not to pesticides. We showed that DAPs are metabolically stable, have high oral bioavailability, and are rapidly excreted in the urine following oral exposure. Results suggest that DAP measurements may lead to overestimates of human organophosphate pesticide exposure. In the second phase of research, a quick, easy, cheap, effective, rugged, and safe (QuEChERS) based analytical method was developed and validated for quantifying polycyclic aromatic hydrocarbons (PAHs) in biotic matrices with fat contents that ranged from 3 to 11%. Our method improved PAH recoveries 50 to 200% compared to traditional QuEChERS methods, performed as well or better than state of the art Soxhlet and accelerated solvent extraction methods, had sensitivity useful for chemical exposure assessments, and reduced sample preparation costs 10-fold. The validated QuEChERS method was subsequently employed in a human exposure assessment. Little is known about how traditional Native American fish smoke-preserving methods impact PAH loads in smoked foods, Tribal PAH exposure, or health risks. Differences in smoked salmon PAH loads were not observed between Tribal smoking methods, where smoking methods were controlled for smoking structure and smoke source. PAH loads in Tribally smoked fish were up to 430 times greater than those measured in commercially available smoked fish. It is not likely that dietary exposure to non-carcinogenic PAHs at heritage ingestion rates of 300 grams per day poses an appreciable risk to human health. However, levels of PAHs in traditionally smoked fish may pose an elevated risk of cancer if consumed at high rates over a lifetime. Accurately estimating PAH exposure in cases where aquatic foods become contaminated is often hindered by sample availability. To overcome this challenge, we developed a novel analytical approach to predict PAH loads in resident crustacean tissues based on passive sampling device (PSD) PAH measurements and partial least squares regression. PSDs and crayfish collected from 9 sites within, and outside of, the Portland Harbor Superfund site captured a wide range of PAH concentrations in a matrix specific manner. Partial least squares regression of crayfish PAH concentrations on freely dissolved PAH concentrations measured by PSDs led to predictions that generally differed by less than 12 parts per billion from measured values. Additionally, most predictions (> 90%) were within 3-fold of measured values, while state of the art bioaccumulation factor approaches typically differ by 5 to 15-fold compared to measured values. In order to accurately characterize chemical exposure, new analytical approaches are needed that can simulate chemical changes in bioavailable PAH mixtures resulting from natural and/or remediation processes. An approach based on environmental passive sampling and in-laboratory UVB irradiation was developed to meet this need.
Standard PAH mixtures prepared in-lab and passive sampling device extracts collected from PAH-contaminated environments were used as model test solutions. UV irradiation of solutions reduced PAH levels 20 to 100% and led to the formation of several toxic oxygenated-PAHs that have been previously measured in the environment. Site-specific differences in oxygenated-PAH formation were also observed. The research presented in this dissertation can be used to advance chemical exposure estimation techniques, rapidly and cost-effectively quantify a suite of PAHs in biotic tissues, and simulate the effect of abiotic transformation processes on the bioavailable fraction of environmental contaminants. / Graduation date: 2013 / Access restricted to the OSU Community at author's request from April 3, 2013 - April 3, 2014
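As a hedged illustration of the partial least squares step described above (not the study's actual pipeline or data), the sketch below regresses hypothetical crayfish tissue PAH concentrations on PSD-measured freely dissolved concentrations with scikit-learn; the variable names, sampling design, and synthetic data are all assumptions.

    # Illustrative sketch: PLS regression of tissue PAH levels on passive-sampler
    # measurements. Data are synthetic; in the study, X would hold freely dissolved
    # PAH concentrations from PSDs and y the crayfish tissue loads.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_sites, n_pahs = 40, 12                                          # hypothetical design
    X = rng.lognormal(mean=1.0, sigma=0.8, size=(n_sites, n_pahs))    # PSD (ng/L)
    true_coef = rng.uniform(0.1, 0.6, size=(n_pahs, 1))
    y = X @ true_coef + rng.normal(scale=2.0, size=(n_sites, 1))      # tissue (ppb)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    pls = PLSRegression(n_components=3)       # number of latent components is a choice
    pls.fit(X_train, y_train)
    pred = pls.predict(X_test)

    abs_err = np.abs(pred - y_test)
    within_3fold = np.mean((pred / y_test < 3) & (y_test / pred < 3))
    print(f"median absolute error: {np.median(abs_err):.1f} ppb")
    print(f"fraction of predictions within 3-fold of measured: {within_3fold:.2f}")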
3

Deep learning for promoter recognition: a robust testing methodology

Perez Martell, Raul Ivan 29 April 2020 (has links)
Understanding DNA sequences has been an ongoing endeavour within bioinformatics research. Recognizing the functionality of DNA sequences is a non-trivial and complex task that can bring insights into understanding DNA. In this thesis, we study deep learning models for recognizing gene-regulating regions of DNA, more specifically promoters. We first model DNA as a language by training natural language processing models to recognize promoters. Afterwards, we delve into current models from the literature to learn how they achieve their results. Previous works have focused on limited curated datasets to both train and evaluate their models using cross-validation, obtaining high-performing results across a variety of metrics. We implement and compare three models from the literature against each other, using their datasets interchangeably throughout the comparison tests. This highlights shortcomings within the training and testing datasets for these models, prompting us to create a robust promoter recognition testing dataset and to develop a testing methodology that creates a wide variety of testing datasets for promoter recognition. We then test the models from the literature with the newly created datasets and highlight considerations to take into account when choosing a training dataset. To help others avoid such issues in the future, we open-source our findings and testing methodology. / Graduate
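As a minimal, hedged sketch of the kind of model such work evaluates (not any of the specific architectures the thesis compares), the code below one-hot encodes DNA sequences and trains a small 1D convolutional promoter classifier with Keras; the sequence length, labels, and hyperparameters are all assumptions and the data are random placeholders.

    # Illustrative sketch: a tiny 1D CNN promoter classifier on one-hot encoded DNA.
    # Sequences and labels are synthetic placeholders, not the thesis's datasets.
    import numpy as np
    from tensorflow import keras

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
    SEQ_LEN = 300                                   # assumed fixed window length

    def one_hot(seq):
        """Encode an ACGT string as a (SEQ_LEN, 4) one-hot matrix."""
        mat = np.zeros((SEQ_LEN, 4), dtype=np.float32)
        for i, base in enumerate(seq[:SEQ_LEN]):
            if base in BASES:
                mat[i, BASES[base]] = 1.0
        return mat

    # Synthetic stand-in data: random sequences with random promoter/non-promoter labels.
    rng = np.random.default_rng(0)
    seqs = ["".join(rng.choice(list("ACGT"), SEQ_LEN)) for _ in range(500)]
    X = np.stack([one_hot(s) for s in seqs])
    y = rng.integers(0, 2, size=500)

    model = keras.Sequential([
        keras.layers.Input(shape=(SEQ_LEN, 4)),
        keras.layers.Conv1D(32, kernel_size=11, activation="relu"),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),   # P(promoter)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)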
4

Evaluating the effects of data collection methodology on the assessment of situations with the Riverside Situational Q-Sort

Unknown Date (has links)
The practice of evaluating situations with the Riverside Situational Q-Sort (RSQ; Wagerman & Funder, 2009) is relatively new. The present study aimed to investigate the theoretical framework supporting the RSQ with regard to the potential confounds of emotional state and the use of Likert-type ratings. Data were collected from a sample of Florida Atlantic University students (N = 206). Participants were primed for either a positive or negative mood state and asked to evaluate a situation with the RSQ in either the Q-Sort or Likert-type response format. Results suggested that response format has a significant influence on RSQ evaluations, but mood and the interaction between mood and response format do not. Exploratory analyses were conducted to determine the underlying mechanisms responsible. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
5

A Comparison of Meta-Analytic Approaches to the Analysis of Reliability Estimates

Mason, Denise Corinne 10 July 2003 (has links)
In the last few years, several studies have attempted to meta-analyze reliability estimates. The initial study to outline a methodology for meta-analyzing reliability coefficients was published by Vacha-Haase in 1998. Vacha-Haase used a very basic meta-analytic model to find a mean effect size (reliability) across studies. There are two main reasons for meta-analyzing reliability coefficients. First, recent research has shown that many studies fail to report the appropriate reliability for the measure and population of the actual study (Vacha-Haase, Ness, Nilsson and Reetz, 1999; Whittington, 1998; Yin and Fan, 2000). Second, very little research has been published describing the way reliabilities for the same measure vary according to moderators such as time, form length, population differences in trait variability and others. Vacha-Haase (1998) proposed meta-analysis as a method by which the impact of moderators may become better understood. Although other researchers have followed the Vacha-Haase example and meta-analyzed the reliabilities for several measures, little has been written about the best methodology to use for such analysis. Reliabilities are much larger on average than are validities, and thus tend to show greater skew in their sampling distributions. This study took a closer look at the methodology with which reliability can be meta-analyzed. Specifically, a Monte Carlo study was run so that population characteristics were known. This provided a unique ability to test how well each of three methods estimates the true population characteristics. The three methods studied were the Vacha-Haase method as outlined in her 1998 article, the well-known Hunter and Schmidt "bare bones method" (1990) and the random-effects version of Hedges' method as described by Lipsey and Wilson (2001). The methods differ both in how they estimate the random-effects variance component (or in one case, whether the random-effects variance component is estimated at all) and in how they treat moderator variables. Results showed which of these methods is best applied to reliability meta-analysis. A combination of the Hunter and Schmidt (1999) method and weighted least squares regression is proposed.
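For readers unfamiliar with the "bare bones" approach the study compares, here is a hedged numerical sketch of a sample-size-weighted bare-bones meta-analysis applied to reliability coefficients; the coefficients and sample sizes are invented, and the sampling-error formula shown is the one commonly used for correlations, whose suitability for skewed reliability distributions is part of what the study questions.

    # Illustrative sketch of a Hunter & Schmidt style "bare bones" meta-analysis
    # applied to reliability coefficients. The input data are invented.
    import numpy as np

    r = np.array([0.82, 0.88, 0.79, 0.91, 0.85, 0.76])   # reported reliabilities
    n = np.array([120, 240, 95, 310, 150, 80])            # study sample sizes

    # Sample-size-weighted mean reliability.
    r_bar = np.sum(n * r) / np.sum(n)

    # Weighted observed variance of reliabilities across studies.
    var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)

    # Expected sampling-error variance, using the usual correlation-based formula
    # (an assumption when transferred to reliabilities).
    n_bar = np.mean(n)
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)

    # Residual ("true") variance attributable to moderators, floored at zero.
    var_rho = max(var_obs - var_err, 0.0)

    print(f"weighted mean reliability: {r_bar:.3f}")
    print(f"observed variance: {var_obs:.5f}, sampling-error variance: {var_err:.5f}")
    print(f"estimated true variance (moderator-driven): {var_rho:.5f}")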
6

A comparison of meta-analytic approaches to the analysis of reliability estimates [electronic resource] / by Denise Corinne Mason.

Mason, Denise Corinne. January 2003 (has links)
Includes vita. / Title from PDF of title page. / Document formatted into pages; contains 114 pages. / Thesis (Ph.D.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: In the last few years, several studies have attempted to meta-analyze reliability estimates. The initial study, to outline a methodology for meta-analyzing reliability coefficients, was published by Vacha-Haase in 1998. Vacha-Haase used a very basic meta-analytic model to find a mean effect size (reliability) across studies. There are two main reasons for meta-analyzing reliability coefficients. First, recent research has shown that many studies fail to report the appropriate reliability for the measure and population of the actual study (Vacha-Haase, Ness, Nilsson and Reetz, 1999; Whittington, 1998; Yin and Fan, 2000). Second, very little research has been published describing the way reliabilities for the same measure vary according to moderators such as time, form length, population differences in trait variability and others. / ABSTRACT: Vacha-Haase (1998) proposed meta-analysis, as a method by which the impact of moderators may become better understood. Although other researchers have followed the Vacha-Haase example and meta-analyzed the reliabilities for several measures, little has been written about the best methodology to use for such analysis. Reliabilities are much larger on average than are validities, and thus tend to show greater skew in their sampling distributions. This study took a closer look at the methodology with which reliability can be meta-analyzed. Specifically, a Monte Carlo study was run so that population characteristics were known. This provided a unique ability to test how well each of three methods estimates the true population characteristics. / ABSTRACT: The three methods studied were the Vacha-Haase method as outlined in her 1998 article, the well-known Hunter and Schmidt "bare bones method" (1990) and the random-effects version of Hedges' method as described by Lipsey and Wilson (2001). The methods differ both in how they estimate the random-effects variance component (or in one case, whether the random-effects variance component is estimated at all) and in how they treat moderator variables. Results showed which of these methods is best applied to reliability meta-analysis. A combination of the Hunter and Schmidt (1999) method and weighted least squares regression is proposed. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
7

Posouzení interní metodiky testování na konkrétním projektu a návrh vylepšení / Assessment of an internal testing methodology on a specific project and proposed improvements

Hofrichterová, Kateřina January 2015 (has links)
The thesis focuses on testing banking software. The first part contains an introduction to the topic and introduces the project, including its objectives, the nature of the application, and my role in it. It also describes the case study method used throughout the work. The company is the implementer of the project for the client. The first objective is to describe the company's testing methodology, which the project followed, and to compare it with other methodologies with regard to their robustness, the degree of organization of work, and internal allocation. The second objective is to identify bottlenecks and to propose improvements. Every project is unique, so a different degree of organization suits each one. The project is managed with an agile method; the framework is therefore expected to anticipate complications and their subsequent resolution as they arise in other areas. The work identifies the causes of the bottlenecks as internal factors (human labor) and external factors (legislative changes). Not only was the current situation mapped, but the causes of the problems were also outlined. The third goal, and the most important one in terms of added value, is to have the proposed solutions verified by experts. In the thesis I describe ways of solving the problems and discussed them both with employees on the implementer's side and with people from the client company. The experts from the client know the good practices in the company and can assess suitability with respect to the nature of the company, while colleagues on the supplier side have extensive experience from similar projects. Their opinion on the issue can make the application of the recommendations more effective. The biggest benefit is not only the approval but also the subsequent implementation over the course of the entire project.
8

Metodika testování webových aplikací / Methodology of Testing Web Applications

Šplíchalová, Marcela January 2008 (has links)
The principal aim of this thesis is to create a unified methodical framework for a smaller software testing department. Furthermore, its aim is to define and describe an important element of testing, the software defect, to define the way it is reported, and finally, with this in view, to specify the problem areas of web applications. The last aim is to find a solution for how to publish this methodology. The aims of this thesis were reached by studying available theoretical findings and applying principles known from well-established, proven methodologies that comprehensively cover software development. These principles were confronted with the author's practical experience, and on this basis the methodology described above was created. The contribution of the thesis lies in the internal structure of the methodology, the summarization of the most important information, the application of practical personal experience, and the adaptation of some elements of the methodology to its use in a small team. Other strong points are the proposals and recommendations on how to improve the situation in the testing department of a particular company, how to publish the methodology, and how to maintain it in the future. The thesis is composed of three main parts. In the first chapter, the essential characteristics of testing, models of the software development life cycle, and the types and levels of tests are given. The second chapter is the crucial part of the thesis. It describes the whole methodology: the main workflow and its details (processes), the activities performed during these processes, the roles occurring in the methodology and their responsibilities (for activities and artefacts), the artefacts produced by the testing department, a full description of a defect and its reporting, and finally a summary of defects appearing in the web application environment. The last chapter deals with putting the methodology into operation: the current technical coverage of particular parts of the methodology, suggestions for improving testing in the future, and possibilities for publishing the methodology.
9

Způsoby ověření kvality aplikací a systémů (metodika, nástroje) / Common ways of controlling the quality of software applications and systems (methodology & tools)

Borůvka, Zdeněk January 2008 (has links)
An integral part of all systematically managed software development or maintenance projects is an emphasis on the continuous quality of all project activities. Because the final quality of project deliverables (new or enhanced applications, preconfigured solutions such as SAP) strongly influences project success, and therefore also the long-term relationship between customer and contractor(s), this document focuses on ways to proactively prevent mistakes (within the whole software development life cycle) and on techniques that help establish better quality control of important deliverables through a systematic approach, high-quality tools, and suitable metrics in the software testing discipline. The document gradually covers typical project areas where it is necessary to keep the quality of project members' outputs under control, places testing in the context of a typical project, offers practical recommendations on testing methodology, tools, and widely tested technologies, and explains trends and risks in the testing domain. The goal of this document is not only to document the wide range of possibilities offered by frequently used testing techniques and tools but also to offer practical guidance on deploying the testing discipline. The document was written by comparing the author's professional experience in software quality management with knowledge gathered from the information sources listed in the document, and it consists of concrete conclusions from this comparison.
10

Automatizace regresního testování / Automation of regression testing

Čecháková, Lucie January 2015 (has links)
This study is primarily focused on software testing, especially on regression tests and their automation. The main objective is to introduce and verify a novel procedure for the implementation and automation of software regression testing. Specific objectives include putting regression testing into the context of the other types of tests applied in software testing, introducing a novel Methodology for analysis of automation of regression tests, introducing a novel Methodology for analysis of implementation of regression tests, practically verifying the applicability of both methodologies on a real project, and suggesting how to adapt these methodologies on the basis of practical usage. The theoretical part of this study summarizes the basic theory of software testing, decomposes it in detail, and introduces its various levels, types, and categories. It also presents the field of test automation, explains its advantages and disadvantages, and introduces an overview of the test types that are generally recommended for automation. More attention is paid to regression testing and its prerequisites and potential for automation. The practical part of this study proposes the two methodologies, explains their use on a particular practical project, and focuses on evaluating the success of their practical utilization. Based on this evaluation, the methodologies are consequently extended. Outputs of the study also include extended variants of the Methodology for analysis of automation of regression tests and the Methodology for analysis of implementation of regression tests, which are available for use on other practical projects.
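As a hedged illustration of the kind of automated regression test such a methodology analyzes (not part of the thesis itself), the following Python sketch shows a small pytest regression suite pinning down previously observed behaviour of a hypothetical discount-calculation function; the function and its expected values are invented.

    # Illustrative sketch: a tiny pytest regression suite. The function under test
    # and the expected values are hypothetical placeholders.
    import pytest

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical production function whose behaviour we want to pin down."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # Each case records behaviour that earlier releases already exhibited, so any
    # change in output is flagged as a regression.
    @pytest.mark.parametrize(
        "price, percent, expected",
        [
            (100.0, 0, 100.0),
            (100.0, 15, 85.0),
            (12.5, 10, 11.25),
        ],
    )
    def test_apply_discount_regression(price, percent, expected):
        assert apply_discount(price, percent) == expected

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)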
