1

3D Reconstruction of Simulated Bridge Pier Local Scour Using Green Laser and HydroLite Sonar.

Unknown Date (has links)
Scour is the process of sediment erosion around bridge piers and abutments due to natural and man-made hydraulic activities. Excessive scour is a critical problem that is typically handled by enforcing design requirements that make the submerged structures more resilient. The purpose of this research is to demonstrate the feasibility of the optical-based Green Laser scanner and the HydroLite Sonar in a laboratory setting for capturing the 3D profile of simulated local scour holes. The Green Laser successfully reconstructed 3D point-cloud images of scour profiles under both dry and clear-water conditions. The scour topography derived after applying a water-refraction correction was compared with the simulated scour hole and was within 1% of the design dimensions. The elevations at the top and bottom surfaces of the 6.5-inch scour hole were -46.6 and -53.11 inches from the reference line at the origin (0,0,0) of the laser scanner. The HydroLite Sonar recorded hydrographic survey points of the scour's interior surface. The survey points were then processed in MATLAB to obtain a 3D mesh triangulation, as sketched below. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
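
As a rough illustration of that final processing step, the following is a minimal Python analogue of a survey-point mesh triangulation (the file name and column layout are assumptions, not taken from the thesis):

```python
# Sketch: triangulate scattered hydrographic survey points into a 3D surface mesh.
# Assumes points.csv holds x, y, z survey coordinates (hypothetical file/layout).
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("points.csv", delimiter=",")  # shape (n, 3): x, y, z
xy = points[:, :2]                                # triangulate in plan view
tri = Delaunay(xy)                                # 2.5D Delaunay triangulation

# Each row of tri.simplices indexes three survey points forming one mesh facet.
for a, b, c in tri.simplices[:5]:
    print(points[a], points[b], points[c])
```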
2

Enhancing evaluation techniques using Mutation Operator Production Rule System and accidental fault methodology

Gupta, Pranshu January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / David A. Gustafson / Software testing is an essential component of the software development life cycle, and certain software testing methodologies require enormous amounts of time and expense in order to detect and correct errors in a software system. The two primary goals of any testing methodology are error detection and increased reliability. Each methodology uses a unique technique to achieve these goals and detect faults in the software. In this paper, an evaluation approach is presented that can enhance evaluation techniques for software testing methodologies. First, a new framework, the Mutation Operator Production Rule System (MOPRS), is introduced that allows the specification of mutation operators that are effective, precise, and focused on object-oriented faults. The framework adds the new concept of an effective mutation operator: a precise set of rules that, when applied to a program, creates a set of mutants such that, if a test suite kills these mutants, further seeded or accidental faults of the same fault type are highly likely to be killed by the same test suite. These effective mutation operators target fault types specific to object-oriented programming concepts, so object-oriented faults are detected rather than only the traditional faults common to both object-oriented and non-object-oriented programming. These mutation operators cover gaps in the existing set of mutation operators. Second, an evaluation method, the Accidental Fault Methodology (AFM), is described that can enhance evaluation techniques for software testing methodologies. When effective mutation operators are used together with this evaluation technique, it can demonstrate whether the software testing methodology successfully detected the induced faults as well as any accidental faults of the same object-oriented fault type.
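
For readers unfamiliar with mutation testing, the following is a minimal sketch of what a single mutation operator does; it is a generic illustration using Python's standard ast module, not the MOPRS rule system itself:

```python
# Sketch of one mutation operator: replace == with != wherever it occurs.
# This illustrates the general idea of a mutation operator; the MOPRS
# production rules in the dissertation target object-oriented fault types.
import ast

class EqToNeq(ast.NodeTransformer):
    """Mutate every equality comparison into an inequality."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op
                    for op in node.ops]
        return node

source = "def is_zero(x):\n    return x == 0\n"
tree = ast.parse(source)
mutant = ast.fix_missing_locations(EqToNeq().visit(tree))
print(ast.unparse(mutant))  # prints the mutant: return x != 0
```

A test suite that asserts, say, `is_zero(0)` is true would kill this mutant; in the dissertation's terms, an operator is effective when killing its mutants also catches further faults of the same type.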
3

Evaluation Techniques and Graph-Based Algorithms for Automatic Summarization and Keyphrase Extraction

Hamid, Fahmida 08 1900 (has links)
Automatic text summarization and keyphrase extraction are two areas of research that span natural language processing and information retrieval. They have recently become very popular because of their wide applicability. Devising generic techniques for these tasks is challenging due to several issues, yet a good number of intelligent systems perform them. As different systems are designed from different perspectives, evaluating their performance with a generic strategy is crucial, and it has become immensely important to evaluate performance with minimal human effort. In our work, we focus on designing a relativized scale for evaluating different algorithms. This is our major contribution, and it challenges the traditional approach of working with an absolute scale. We consider the impact of several environment variables (the lengths of the document, the references, and the system-generated outputs) on performance: instead of defining rigid lengths, we show how to adjust to their variations. We derive a mathematically sound baseline that should work for all kinds of documents. We emphasize automatically determining the syntactic well-formedness of the structures (sentences), and we propose defining an equivalence class for each unit (e.g. word) instead of an exact string-matching strategy. We show an evaluation approach that considers the weighted relatedness of multiple references in order to adjust to the degree of disagreement between the gold standards. We publish the proposed approach as a free tool so that other systems can use it, and we have accumulated a dataset of scientific articles with a reference summary and keyphrases for each document. Our approach is applicable not only to evaluating single-document tasks but also to evaluating multi-document tasks. We have tested our evaluation method on three intrinsic tasks (taken from the DUC 2004 conference), and in all three cases it correlates positively with ROUGE. Based on our experiments on the DUC 2004 question-answering task, it correlates with the human decision (an extrinsic task) with 36.008% accuracy. In general, we can state that the proposed relativized scale performs as well as the popular technique (ROUGE), with added flexibility in the length of the output. As part of this work we have also devised a new graph-based algorithm focusing on sentiment analysis. The proposed model can extract units (e.g. words or sentences) from the original text belonging either to the positive sentiment pole or to the negative sentiment pole. It embeds both types of sentiment flow into a single text-graph, whose nodes are words or phrases and whose edges are their relations. By recursively applying two mutually exclusive relations, the model builds the final ranking of the nodes. Based on the final ranking, it extracts two segments from the article: one with highly positive sentiment and the other with highly negative sentiment. The output of this model was tested against the non-polar TextRank-generated output to quantify how much of the polar summaries actually covers the facts along with the sentiment.
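
TextRank, the baseline mentioned above, is itself a graph-based ranking algorithm, and its core idea can be sketched briefly. The sentences and the overlap-based similarity below are simple stand-ins for illustration, not the thesis's sentiment-flow model:

```python
# Sketch of TextRank-style sentence ranking: build a graph whose nodes are
# sentences and whose edge weights are lexical overlap, then run PageRank.
import itertools
import networkx as nx

sentences = [
    "Graph-based ranking scores text units by their relations.",
    "Each text unit is a node and relations between units are edges.",
    "The highest-scoring units form the extractive summary.",
]

def overlap(a, b):
    # Simple stand-in similarity: shared words, normalized by total size.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa) + len(wb))

g = nx.Graph()
for i, j in itertools.combinations(range(len(sentences)), 2):
    w = overlap(sentences[i], sentences[j])
    if w > 0:
        g.add_edge(i, j, weight=w)

ranks = nx.pagerank(g, weight="weight")
best = max(ranks, key=ranks.get)
print(sentences[best])  # highest-ranked sentence = one-line summary
```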
4

Summarizing the Results of a Series of Experiments : Application to the Effectiveness of Three Software Evaluation Techniques

Olorisade, Babatunde Kazeem January 2009 (has links)
Software quality has become and persistently remains a big issue among software users and developers, so the importance of software evaluation cannot be overemphasized. An accepted fact in software engineering is that software must undergo an evaluation process during development to ascertain and improve its quality level. In fact, there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. Therefore, it may not be realistic or cost-effective to remove all software defects prior to product release. It is thus crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and likely to yield optimum quality results for a given product; it boils down to choosing the most appropriate technique for each situation. However, not much knowledge is available on the strengths and weaknesses of the available evaluation techniques. Most of the information related to these techniques focuses on how to apply them, not on their applicability conditions: practical information, suitability, strengths, weaknesses, and so on. This research contributes to the available applicability knowledge of software evaluation techniques. More precisely, it focuses on code reading by stepwise abstraction as a representative of the static techniques, as well as equivalence partitioning (a functional technique) and decision coverage (a structural technique) as representatives of the dynamic techniques. The specific focus of the research is to summarize the results of a series of experiments conducted to investigate the effectiveness of these techniques, among other factors. By effectiveness we mean the potential of each dynamic technique to generate test cases capable of revealing software faults, or the ability of the static technique to generate abstractions that aid the detection of faults. The experiments used two versions of three different programs with seven different faults seeded into each program. This work uses the results of the eight experiments, originally performed and analyzed separately. The analysis results were pooled together and jointly summarized in order to extract common knowledge from the experiments, using a qualitative deduction approach created in this work, as it was decided not to use formal aggregation at this stage. Since the experiments were performed by different researchers, in different years, and in some cases at different sites, several problems had to be tackled before the results could be summarized: the data files exist in different languages, the structure of the files differs, different names are used for data fields, the analyses were done using different confidence levels, and so on. The first step, taken at the inception of this research, was to apply all the techniques to the programs used during the experiments in order to detect the faults. The purpose of this hands-on experience was to become acquainted with the faults, failures, programs, and experiment situations in general, and to better understand the data as recorded from the experiments. Afterwards, the data files were recreated to conform to a uniform language, data meaning, file style, and structure.
A well-structured directory was created to keep all the data, analysis, and experiment files for all the experiments in the series. These steps paved the way for a feasible synthesis of the results. Using our method, technique, program, fault, program-technique, program-fault, and technique-fault were selected as the main and interaction effects carrying knowledge relevant to the summary of the analysis. The results, as reported in this thesis, indicate that the functional technique and the structural technique are equally effective as far as the programs and faults in these experiments are concerned, and both perform better than code review. The analysis also revealed that the effectiveness of the techniques is influenced by the fault type and the program type: some faults were detected better in certain programs, some were better detected with certain techniques, and the techniques yielded different results in different programs. / I can alternatively be contacted through: qasimbabatunde@yahoo.co.uk
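
As a quick illustration of one of the dynamic techniques studied here, equivalence partitioning divides the input domain into classes and picks one representative test case per class. The following is a minimal sketch with a hypothetical function under test, not one of the experiment's programs:

```python
# Sketch of equivalence partitioning: the input domain of a hypothetical
# grade() function splits into three classes, and one representative test
# case per class suffices to cover the partition.
def grade(score: int) -> str:
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# One representative per equivalence class:
assert grade(75) == "pass"    # class: 50..100 -> pass
assert grade(20) == "fail"    # class: 0..49   -> fail
try:
    grade(-5)                 # class: invalid input -> error
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for invalid input")
```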
5

An Empirical Evaluation & Comparison of Effectiveness & Efficiency of Fault Detection Testing Techniques

Natraj, Shailendra January 2013 (has links)
Context: This thesis is an analysis of the replication of a software experiment conducted by Natalia and Sira at the Technical University of Madrid, Spain. The empirical study was conducted to verify and validate the experimental data and to evaluate the effectiveness and efficiency of the testing techniques. The analysis blocks considered were observable faults, failure visibility, and observed faults. The statistical data analysis used the ANOVA and Classification packages of SPSS. Objective: To evaluate and compare the results obtained from the statistical data analysis, and to verify and validate the effectiveness and efficiency of the testing techniques by using ANOVA and classification-tree analysis for percentage of subjects, percentage of defect-subjects, and values (yes/no) for each of the blocks. RQ1: Empirical evaluation of the effectiveness of fault-detection testing techniques for the blocks (observable faults, failure visibility, and observed faults), using ANOVA and classification trees. RQ2: Empirical evaluation of the efficiency of fault-detection techniques, based on time and number of test cases, using ANOVA. RQ3: Comparison and interpretation of the results obtained for both effectiveness and efficiency. Method: The research focuses on statistical data analysis to empirically evaluate the effectiveness and efficiency of the fault-detection techniques for the experimental data collected at UPM (Technical University of Madrid, Spain). Empirical strategy used: software experiment. Results: The analysis results obtained for the observable fault types were standardized (Ch. 5). Within the observable-faults block, both techniques, functional and structural, were equally effective. In the failure-visibility block, the results were partially standardized; the program types nametbl and ntree were equally effective for fault detection, and more so than cmdline. The results for the observed-faults block were partially standardized and diverse; the significant factors in this block were program type, fault type, and technique. In the efficiency block, subjects took less time to isolate faults in the program type cmdline, and fault-detection efficiency was also highest in cmdline with the help of the generated test cases. Conclusion: This research will help practitioners in industry and academia understand the factors influencing the effectiveness and efficiency of testing techniques. The work also presents a comprehensive analysis and comparison of the results for the observable-faults, failure-visibility, and observed-faults blocks, and discusses the factors influencing the efficiency of the fault-detection techniques. / shailendra.natraj@gmail.com +4917671952062
