1

Detecting Dissimilar Classes of Source Code Defects

August 2013 (has links)
Software maintenance accounts for the majority of software development cost and effort, with its major activities focused on the detection, location, analysis and removal of defects present in the software. Although software defects can originate in, and be present at, any phase of the software development life-cycle, the implementation (i.e., source code) contains more than three-fourths of the total defects. Due to the diverse nature of these defects, their detection and analysis have to be carried out by equally diverse tools, often necessitating the application of multiple tools for reasonable defect coverage, which directly increases maintenance overhead. Unified detection tools combine different specialized techniques into a single, massive core, resulting in operational difficulty and increased maintenance cost. The objective of this research was to find a technique that can detect dissimilar defects using a simplified model and a single methodology, both of which should contribute to creating an easy-to-acquire solution. Following this goal, a ‘Supervised Automation Framework’ named FlexTax was developed for semi-automatic defect mapping and taxonomy generation, and then applied to a large-scale real-world defect dataset to generate a comprehensive defect taxonomy, which was verified using machine learning classifiers and manual verification. This taxonomy, along with an extensive literature survey, was used to understand the properties of different classes of defects and to develop defect similarity metrics. The taxonomy and the similarity metrics were then used to develop a defect detection model and associated techniques, collectively named Symbolic Range Tuple Analysis, or SRTA. SRTA relies on symbolic analysis, path summarization and range propagation to detect dissimilar classes of defects using a simplified set of operations.
To verify the effectiveness of the technique, SRTA was evaluated by processing multiple real-world open-source systems, by direct comparison with three state-of-the-art tools, by a controlled experiment, by using an established benchmark, by comparison with other tools through secondary data, and by a large-scale fault-injection experiment conducted using a mutation-injection framework that relied on the taxonomy developed earlier for the definition of mutation rules. Experimental results confirmed SRTA's practicality, generality, scalability and accuracy, and demonstrated its applicability as a new defect detection technique.
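The abstract's core mechanism, range propagation, can be illustrated in miniature: track a conservative [lo, hi] interval for each variable through assignments and flag statements whose index range can exceed a buffer bound. This is a simplified sketch of the general idea only, not the thesis's SRTA implementation; the statement format and function names are invented for illustration.

```python
# Minimal range-propagation sketch: each variable carries an interval, and an
# index operation is flagged as a possible defect if its interval can escape
# the buffer bounds. Not the actual SRTA tool from the thesis.

def propagate(stmts, buf_size):
    """stmts: list of ('set', var, lo, hi), ('add', var, k) or ('index', var).
    Returns indices of statements where an out-of-bounds access is possible."""
    ranges = {}   # var -> (lo, hi)
    defects = []
    for i, stmt in enumerate(stmts):
        op = stmt[0]
        if op == 'set':                      # var = value in [lo, hi]
            _, var, lo, hi = stmt
            ranges[var] = (lo, hi)
        elif op == 'add':                    # var += k shifts the interval
            _, var, k = stmt
            lo, hi = ranges[var]
            ranges[var] = (lo + k, hi + k)
        elif op == 'index':                  # buf[var]: check against bounds
            _, var = stmt
            lo, hi = ranges[var]
            if lo < 0 or hi >= buf_size:
                defects.append(i)
    return defects

prog = [('set', 'i', 0, 9),    # i in [0, 9]
        ('index', 'i'),        # safe: 0..9 < 10
        ('add', 'i', 3),       # i now in [3, 12]
        ('index', 'i')]        # possible overflow: 12 >= 10
print(propagate(prog, buf_size=10))  # [3]
```

The conservative intervals make the analysis sound but imprecise: a flagged statement may be a false positive, which is the usual trade-off in this family of static analyses.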
2

Optimal use of computing equipment in an automated industrial inspection context

Jubb, Matthew James January 1995 (has links)
This thesis deals with automatic defect detection. The objective was to develop the techniques required by a small manufacturing business to make cost-efficient use of inspection technology. In our work on inspection techniques we discuss image acquisition and the choice between custom and general-purpose processing hardware. We examine the classes of general-purpose computer available and study popular operating systems in detail. We highlight the advantages of a hybrid system interconnected via a local area network and develop a sophisticated suite of image-processing software based on it. We quantitatively study the performance of elements of the TCP/IP networking protocol suite and comment on appropriate protocol selection for parallel distributed applications. We implement our own distributed application based on these findings. In our work on inspection algorithms we investigate the potential uses of iterated function systems and Fourier transform operators when preprocessing images of defects in aluminium plate acquired using a linescan camera. We employ a multi-layer perceptron neural network trained by backpropagation as a classifier. We examine the effect of the number of hidden-layer nodes on the training process, and the ability of the network to identify faults in images of aluminium plate. We investigate techniques for introducing positional independence into the network's behaviour. We analyse the pattern of weights induced in the network after training in order to gain insight into the logic of its internal representation. We conclude that the backpropagation training process is so computationally intensive as to present a real barrier to further development in practical neural network techniques, and seek ways to achieve a speed-up. We consider the training process as a search problem and arrive at a process involving multiple, parallel search "vectors" and aspects of genetic algorithms. We implement the system as the aforementioned distributed application and comment on its performance.
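The classifier described above, a multi-layer perceptron trained by backpropagation, can be sketched on a toy problem. The network size, learning rate and XOR task below are illustrative assumptions standing in for the thesis's linescan-image inputs; the point is the mechanics of forward pass, output/hidden deltas and weight updates.

```python
# Toy one-hidden-layer perceptron trained by online backpropagation.
# XOR is used because it is not linearly separable, so the hidden layer
# (whose size the thesis studies) is essential. Illustrative sketch only.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 4                                                                # hidden nodes
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]   # [wx0, wx1, bias]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                   # + output bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + w2[H])
    return h, o

def epoch(lr=1.0):
    err = 0.0
    for x, t in DATA:
        h, o = forward(x)
        err += (t - o) ** 2
        d_o = (o - t) * o * (1 - o)                  # output-layer delta
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])    # hidden delta (pre-update w2)
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            w1[j][2] -= lr * d_h
        w2[H] -= lr * d_o
    return err

losses = [epoch() for _ in range(2000)]
print(f"first-epoch loss {losses[0]:.3f}, final loss {losses[-1]:.3f}")
```

Each epoch touches every weight for every sample, which is why the thesis finds the training cost a barrier and turns to parallel search-based speed-ups.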
3

Numerical models for Rayleigh wave scattering from surface features

Blake, Richard John January 1988 (has links)
No description available.
4

The integration of innovative vision and graphic modelling techniques for surface inspection

Smith, Melvyn Lionel January 1998 (has links)
No description available.
5

Towards a self-evolving software defect detection process

Yang, Ximin 15 August 2007
Software defect detection research typically focuses on individual inspection and testing techniques. However, to apply defect detection techniques effectively, it is important to recognize when to use inspection techniques and when to use testing techniques. In addition, it is important to know when to deliver a product and use maintenance activities, such as troubleshooting and bug fixing, to address the remaining defects in the software.

To detect software defects more effectively, not only should defect detection techniques be studied and compared, but the entire software defect detection process should be studied to give us a better idea of how it can be conducted, controlled, evaluated and improved.

This thesis presents a self-evolving software defect detection process (SEDD) that provides a systematic approach to software defect detection and guides us as to when inspection, testing or maintenance activities are best performed. The approach is self-evolving in that it is continuously improved by assessing the outcome of the defect detection techniques in comparison with historical data.

A software architecture and prototype implementation of the approach are also presented, along with a case study that was conducted to validate the approach. Initial results of using the self-evolving defect detection approach are promising.
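The feedback loop described above can be sketched as a small selector: record the defect yield of each detection activity and recommend the next step from the most recent yields. The activity names, release threshold and selection rule below are illustrative assumptions, not the actual SEDD design.

```python
# Hedged sketch of a self-evolving defect-detection loop: pick the activity
# with the highest recent defect yield; once all yields fall below a release
# threshold, deliver the product and address residual defects in maintenance.

class DefectProcess:
    def __init__(self):
        self.history = {"inspection": [], "testing": []}

    def record(self, activity, defects_found):
        self.history[activity].append(defects_found)

    def recommend(self, release_threshold=1.0):
        """Choose the activity with the highest most-recent defect yield;
        if every yield is below the threshold, release and maintain."""
        recent = {a: (h[-1] if h else 0) for a, h in self.history.items()}
        if all(r < release_threshold for r in recent.values()):
            return "release-and-maintain"
        return max(recent, key=recent.get)

p = DefectProcess()
p.record("inspection", 12)   # early inspections still find many defects
p.record("testing", 7)
print(p.recommend())         # inspection has the higher recent yield
p.record("inspection", 0)    # defect yields dry up
p.record("testing", 0)
print(p.recommend())         # time to release and shift to maintenance
```

A real process would compare against historical baselines rather than a fixed threshold, which is exactly the evolution step the thesis evaluates in its case study.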
7

Bridge damage detection and BIM mapping

Huethwohl, Philipp Karl January 2019 (has links)
Bridges are a vitally important part of modern infrastructure. Their condition needs to be monitored on a continuous basis in order to ensure their safety and functionality. Teams of engineers visually inspect more than half a million bridges per year in the US and the EU. There is clear evidence to suggest that they are not able to meet all bridge inspection guideline requirements. In addition, the format and storage of inspection reports varies considerably across authorities because of the lack of standardisation. A comprehensive and open digital representation of the data involved in and required for bridge inspection is indispensable for exploiting the full potential of modern digital technologies such as big-data exploration, artificial intelligence and database technologies. A thorough understanding of bridge inspection information requirements for reinforced concrete bridges is needed as a basis for overcoming the stated problem. This work starts with a bridge inspection guideline analysis, from which an information model and a candidate binding to Industry Foundation Classes (IFC) are developed. The resulting bridge model can fully store inspection information in a standardised way, which makes it easily shareable and comparable between users and standards. Then, two inspection stages for locating and classifying visual concrete defects are devised, implemented and benchmarked to support the bridge inspection process: in the first stage, healthy concrete surfaces are located and disregarded for further inspection; in the second, hierarchical classification stage, each of the remaining potentially unhealthy surface areas is classified into a specific defect type in accordance with bridge inspection guidelines. The first stage achieves a search-space reduction of over 90% for the subsequent defect-type classification, with a risk of missing a defect patch of less than 10%. The second stage assigns the correct defect type to a potentially unhealthy surface area with a probability of 85%. A prototypical implementation serves as a proof of concept. This work closes the gap between requirements arising from established inspection guidelines, the demand for holistic data models that has recently become known as the "digital twin", and methods for automatically identifying and measuring specific defect classes in small-scale images. It is of great significance for bridge inspectors, bridge owners and authorities, as they now have more suitable data models at hand to store, view and manage maintenance information on bridges, including automatically retrieved defect locations and types. With these developments, a foundation is available for a complete revision of bridge inspection processes on a modern, digital basis.
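The two-stage pipeline described above has a simple shape: a cheap filter discards healthy patches, then a classifier types the survivors. The sketch below captures that structure with stand-in callables; the patch fields, scores and defect names are invented for illustration, whereas the thesis uses trained image classifiers.

```python
# Two-stage inspection sketch: stage 1 reduces the search space by dropping
# patches judged healthy; stage 2 assigns a defect type to the remainder.

def inspect(patches, is_healthy, classify_defect):
    """Return {patch_id: defect_type} for patches surviving stage one."""
    suspects = [p for p in patches if not is_healthy(p)]    # stage 1: filter
    return {p["id"]: classify_defect(p) for p in suspects}  # stage 2: classify

# Toy stand-ins: 'score' mimics a healthy-surface confidence, 'crack_like'
# mimics a visual feature the type classifier would use.
patches = [{"id": 1, "score": 0.95, "crack_like": False},
           {"id": 2, "score": 0.30, "crack_like": True},
           {"id": 3, "score": 0.20, "crack_like": False}]
report = inspect(patches,
                 is_healthy=lambda p: p["score"] > 0.9,
                 classify_defect=lambda p: "crack" if p["crack_like"] else "spalling")
print(report)  # {2: 'crack', 3: 'spalling'}
```

The design rationale mirrors the reported numbers: if stage one removes over 90% of patches, the expensive hierarchical classifier only runs on the small suspect set.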
8

Development of Large Array Auto Write-Scan Photoresist Fabrication and Inspection System

Sierchio, Justin Mark January 2014 (has links)
Current metrology methods involve technicians viewing samples through a microscope, increasing the time, cost, and error rate of inspection. Developing an automated inspection system eliminates these difficulties. Shown in this work is a laser scanning microscope (LSM) design for an opto-electronic detection system (OEDS), based on the concept that intensity differences related to pattern defects can be obtained from reflections off fused-silica samples coated with photoresist (PR) or aluminum. Development of this system for data collection and processing is discussed. Results show that 2.1 μm resolution of these defects is obtainable. Preliminary results for larger-array patterns obtained through stitching processes are also shown. The second part of this work uses the concept of phase-contrast edge detection. For non-metallized patterns, phase changes induced by a refractive-index-sensitive material can be detected with a multi-cell array, rendering the image visible by comparing the respective phases. A variety of defects and samples are shown. Extrapolating the results to larger arrays is also discussed. Latent imaging, i.e., imaging without development, is also evaluated. Future work in the areas of system commercialization, sample storage, and other mass-printing techniques is discussed.
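The intensity-difference principle the system above relies on reduces to a comparison against a defect-free reference: flag scan positions whose deviation exceeds a noise threshold. The profile values and threshold below are illustrative, not measured data from the instrument.

```python
# Minimal intensity-difference defect detection: compare a scanned reflection
# profile against a defect-free reference; deviations beyond the threshold
# mark candidate pattern defects.

def find_defects(scan, reference, threshold):
    """Return indices where |scan - reference| exceeds the threshold."""
    return [i for i, (s, r) in enumerate(zip(scan, reference))
            if abs(s - r) > threshold]

reference = [1.00, 1.00, 1.00, 1.00, 1.00]   # expected reflected intensity
scan      = [1.02, 0.99, 0.60, 1.01, 1.35]   # dip/spike where the pattern deviates
print(find_defects(scan, reference, threshold=0.1))  # [2, 4]
```

In practice the threshold has to sit above the system's noise floor, which is what bounds the achievable defect resolution.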
9

Thermography approaches for building defect detection

Fox, Matthew William January 2016 (has links)
Thermography is a technology that can be used to detect thermally significant defects in buildings, and is traditionally performed using a walk-through methodology. Yet because of limitations such as transient climatic changes, there is a key performance gap between image capture and interpretation. However, new methodologies are now available that actively address some of these limitations. By better understanding alternative methodologies, the performance gap can be reduced. This thesis contrasts three thermography methodologies (walk-through, time-lapse and pass-by) to learn how they deal with limitations and address specific building defects and thermal performance issues. For each approach, practical methodologies were developed and used on laboratory experiments (hot plate) and real dwelling case studies. For the real building studies, 133 dwellings located in Devon and Cornwall (South West England) were studied; this sample represents a broad spectrum of construction types and building ages. Experiments testing these three methodologies found individual strengths and weaknesses for each approach. Whilst traditional thermography can detect multiple defects, characterisation is not always easy to achieve due to the effects of transient changes, which are largely ignored under this methodology. Time-lapse thermography allows the observation of transient changes, from which a more accurate assessment of defect behaviour can be gained. This is due to improved differentiation between environmental conditions (such as cloud cover and clear-sky reflections), actual material thermal behaviour and construction defects. However, time-lapse thermography is slow, complex and normally only observes one view. Pass-by thermography is a much faster methodology, inspecting up to 50 dwellings per survey session. Yet this methodology misses many potential defects due to low spatial resolutions, single (external only) elevation inspection and ignoring transient climate and material changes. The implications of these results for building surveying practice clearly indicate that for improved characterisation of difficult-to-interpret defects such as moisture ingress, thermographers should make use of time-lapse thermography. A review of methodology practicalities illustrates how the need for improved characterisation can be balanced against time and resources when deciding upon the most suitable approach. To help building managers and thermographers decide on the most suitable thermography approach, two strategies have been developed. The first combines different thermography methodologies into a phased inspection program, where spatial and temporal resolution increase with each subsequent inspection. The second provides a decision-making framework to help select the most appropriate thermography methodology for a given scenario or defect.
10

Feature Identification in Wooden Boards Using Color Image Segmentation

Srikanteswara, Srikathyayani 11 September 1998 (has links)
Many different types of features can appear on the surface of wooden boards, lineals or parts. Some of these features should not appear on the surfaces of wood products; these features then become undesirable or removable defects for those products. To manufacture these products, boards are cut up in such a way that the undesirable defects do not appear in the final product. Studies have shown that manual cut-up of boards does not produce the highest possible yield of final product from rough lumber. Because of this, a good deal of research has been done to develop automatic defect detection systems. Color images contain a great deal of valuable information that can be used to locate and identify features in wood, as evidenced by the fact that the human color vision system can accurately locate and identify these features. A very important part of any automatic defect detection system based wholly or in part on color imagery is the location of areas that might contain a wood feature, a feature that, depending on the product being manufactured, may or may not be a defect. This location process is called image segmentation. While a number of automatic defect detection systems have been proposed that employ color imagery, none of these systems use color imagery to do the segmentation. Rather, these systems typically average the red, green, and blue color channels together to form a black-and-white image, and the segmentation operation is then performed on that image. The basic hypothesis of this research is that the use of full color imagery to locate defects will yield better segmentation results than can be obtained when only black-and-white imagery is used. To approach the color wood image segmentation problem, two conventional clustering procedures were selected for examination.
Experiments clearly showed that these procedures, which are similar in flavor to other unsupervised clustering methods, are unsuitable for wood color image segmentation. Based on the experience gained in examining the unsupervised clustering procedures, a model-based approach was developed. This approach is based on the assumption that the distribution of colors in clear wood is Gaussian. Since most of the surface area of boards used by the forest products secondary manufacturing industry is clear wood, the idea is to use the most frequently occurring colors, i.e., the ones that must represent the most likely colors of clear wood, to estimate the mean and covariance of the Normal density function specifying the possible colors of clear wood. Deviations from this model in the observed histogram are used to identify colors that must be caused by features other than clear wood appearing on the surface of the board. / Master of Science
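The model-based idea described above can be sketched in one dimension: fit a Gaussian to the most frequent pixel values (assumed to be clear wood) and mark pixels far from that model as candidate features. The thesis works with full RGB colors and a covariance matrix; the grayscale stand-in, clear-wood fraction and k-sigma threshold below are illustrative assumptions.

```python
# Gaussian clear-wood model, 1-D sketch: the most common pixel values define
# the clear-wood distribution; pixels many standard deviations away are
# flagged as potential features/defects (e.g. knots).
from statistics import mean, pstdev
from collections import Counter

def segment(pixels, clear_fraction=0.6, k=3.0):
    """Fit mean/std to the most frequent values (assumed clear wood) and
    return indices of pixels more than k standard deviations from the mean."""
    counts = Counter(pixels)
    clear, covered = [], 0
    # take the most frequent values until they cover `clear_fraction` of pixels
    for value, n in counts.most_common():
        clear.extend([value] * n)
        covered += n
        if covered >= clear_fraction * len(pixels):
            break
    mu, sigma = mean(clear), pstdev(clear)
    return [i for i, p in enumerate(pixels) if abs(p - mu) > k * sigma]

# mostly clear wood around value 120, one dark knot region at indices 7-8
board = [120, 121, 119, 120, 122, 120, 121, 40, 42, 120, 119, 121]
print(segment(board))  # [7, 8]
```

Fitting the model only to the dominant colors is what makes the approach robust: the defect pixels are too rare to perturb the clear-wood mean and spread.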
