371

Guided Wave Inspection of Pipes Using Electromagnetic Acoustic Transducers

Vasiljevic, Milos January 2007 (has links)
This research covers modeling of Electromagnetic Acoustic Transducers (EMATs) and their application in the excitation and detection of longitudinal guided wave modes for the evaluation of flaws in cylindrical pipes. The combination of the transducer configuration and the frequency of the input current is essential for successful excitation of the desired guided wave modes and for proper interpretation of the results. In this study EMATs were successfully constructed and the longitudinal modes L(0,1) and L(0,2) were excited in the pipe. From the recorded signals the level of simulated damage in the pipe could be assessed, and the location of pipe flaws could be predicted theoretically. Theoretical predictions are matched against experimental results. Dents and holes in pipes are detected by appropriate signal processing of the received L(0,1) and L(0,2) modes.
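A minimal sketch of the pulse-echo reasoning behind locating a flaw from a recorded guided-wave signal, assuming a known group velocity for the excited mode and an illustrative round-trip delay (the function name and all numeric values below are hypothetical, not taken from the thesis):

```python
# Pulse-echo flaw localization sketch: a flaw at distance d reflects the
# transmitted mode back to the EMAT, so the echo arrives after t = 2*d / v_g,
# where v_g is the group velocity of the mode at the excitation frequency.

def flaw_distance(echo_delay_s: float, group_velocity_m_s: float) -> float:
    """Estimate flaw distance from the round-trip delay of a reflected mode."""
    return 0.5 * group_velocity_m_s * echo_delay_s

if __name__ == "__main__":
    # Illustrative values only: L(0,2) is nearly non-dispersive over a useful
    # band in many steel pipes; the actual velocity depends on pipe geometry
    # and excitation frequency and would be read from a dispersion curve.
    v_g_L02 = 5300.0      # m/s, assumed group velocity of L(0,2)
    echo_delay = 750e-6   # s, assumed delay between excitation and echo
    print(f"Estimated flaw location: {flaw_distance(echo_delay, v_g_L02):.2f} m")
```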
372

Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming

Wilkerson, Jerod W. January 2008 (has links)
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS), based on CHA, is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful in other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method that incorporates mutation analysis is proposed, and future research directions are outlined for developing tools to support mutation-analysis-enhanced TDD and for continuing to improve the TDD method.
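The iterative pairing of mutation analysis with TDD described in the second part can be illustrated with a small sketch: only the mutants of code touched in the current TDD cycle are exercised, which is where a change impact analysis such as CHA-AS would supply the change set. This is not the CHA-AS algorithm itself; the mutant list, tests, and change set below are hypothetical:

```python
from typing import Callable, Dict, List, Set

# Toy "program": each function is a named callable; the mutants are alternative
# implementations that a mutation tool would normally generate automatically.
def add(a: int, b: int) -> int:
    return a + b

MUTANTS: Dict[str, List[Callable[[int, int], int]]] = {
    "add": [lambda a, b: a - b,       # arithmetic-operator mutant
            lambda a, b: a + b + 1],  # off-by-one mutant
}

def test_add(fn: Callable[[int, int], int]) -> bool:
    return fn(2, 3) == 5 and fn(-1, 1) == 0

TESTS: Dict[str, List[Callable[[Callable], bool]]] = {"add": [test_add]}

def mutation_score(changed: Set[str]) -> float:
    """Run tests only against mutants of functions in the change set,
    mimicking how change impact analysis narrows the work in each TDD cycle."""
    killed = total = 0
    for name in changed:
        for mutant in MUTANTS.get(name, []):
            total += 1
            # A mutant is 'killed' if at least one test fails on it.
            if any(not test(mutant) for test in TESTS.get(name, [])):
                killed += 1
    return killed / total if total else 1.0

if __name__ == "__main__":
    print(f"Mutation score for changed code: {mutation_score({'add'}):.2f}")
```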
373

Approche multi-énergies associée à un détecteur spectrométrique rayons X pour l'identification de matériaux / A multi-energy approach combined with a spectrometric X-ray detector for material identification

Beldjoudi, Guillaume 19 September 2011 (has links) (PDF)
The development of semiconductor-based photon-counting X-ray detectors has grown rapidly over the past decade, and applications are envisaged in both the medical field and non-destructive testing. These detectors make it possible to perform measurements at multiple energies in a single acquisition, with excellent energy separation. Since around 2008-2009, a genuine race appears to be under way to develop detectors capable of multi-energy measurements over an ever larger number of energy bins. To date, however, among all the work carried out, the benefit of measuring over a large number of energy bins has not been demonstrated for material identification. In the context of a security study, we evaluated the benefit of using photon-counting X-ray detectors that allow measurements over several energy bins. The application studied is the identification of materials in travellers' luggage. We first developed an original method for identifying homogeneous materials that is applicable to any type of multi-energy detector. As a first step, we studied in simulation how material identification performance evolves as the number of counting energy bins increases. An optimization process was carried out to determine, for certain configurations, an optimal layout of the counting energy bins. As a second step, the consequences of taking the detector response function into account were quantified by simulating different detector effects (charge sharing, energy resolution). An experimental validation was finally performed using a spectrometric photon-counting detector. From the measurements made with such a detector, binning the data allowed us to evaluate the identification performance of detectors with different numbers of counting energy bins. Finally, we carried out a preliminary study on transposing the homogeneous-material identification method, initially developed for radiography, to multi-energy tomography. This imaging modality then allows the identification of superimposed materials.
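The material-identification step can be pictured through the Beer-Lambert law: each counting energy bin yields one attenuation measurement, and more bins yield a longer signature to match against a library of candidate materials. A minimal sketch of such a matching step, with made-up attenuation coefficients and material names (none of the values come from the thesis):

```python
# Beer-Lambert: transmitted/incident counts in bin k satisfy
#   N_k / N0_k = exp(-mu_k(material) * thickness)
# so -ln(N_k / N0_k) = mu_k * t is the measured attenuation signature.

# Illustrative linear attenuation coefficients per energy bin (1/cm).
LIBRARY = {
    "water": [0.40, 0.27, 0.21, 0.18],
    "aluminium": [1.10, 0.62, 0.45, 0.37],
    "illustrative_threat_simulant": [0.55, 0.35, 0.26, 0.22],
}

def identify(measured_log_att: list[float]) -> str:
    """Pick the library material whose signature shape best matches the
    measurement, comparing normalized signatures so the unknown thickness
    cancels out."""
    def normalize(v):
        s = sum(v)
        return [x / s for x in v]
    m = normalize(measured_log_att)
    best, best_err = None, float("inf")
    for name, mu in LIBRARY.items():
        ref = normalize(mu)
        err = sum((a - b) ** 2 for a, b in zip(m, ref))
        if err < best_err:
            best, best_err = name, err
    return best

if __name__ == "__main__":
    thickness_cm = 3.0  # unknown in practice; it cancels in the normalization
    measured = [mu * thickness_cm for mu in LIBRARY["water"]]
    print(identify(measured))  # -> water
```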
374

Mokestinio tyrimo ir mokestinio patikrinimo sąveika / Interplay between taxable investigation and taxable inspection

Bagdonas, Šarūnas 16 January 2007 (has links)
Two direct forms of taxpayer control are currently applied in Lithuania: the taxable investigation and the taxable inspection. Both examine and verify how taxpayers calculate, declare and pay taxes. The taxable investigation determines whether a taxable inspection should be initiated. The taxable investigation and the taxable inspection are two different forms of taxable control. The taxable inspection is the more formal and stricter procedure, in which the tax administrator follows the law and verifies the correctness of the taxes calculated, declared and paid; when violations of the law are found, additional taxes and economic sanctions are applied. The taxable investigation is the supervision and analysis of the taxpayer's activity, together with the collection of additional information related to that activity.
375

Model-based visual inspection of hybrid circuits

Blais, Bruno January 1987 (has links)
No description available.
376

Characterization of Seed Defects in Highly Specular Smooth Coated Surfaces

Gnanaprakasam, Pradeep 01 January 2004 (has links)
Many smooth, highly specular coatings such as automotive paints are subject to considerable performance demands, as customer expectations for the appearance of coatings are continually increasing. It is therefore vital to develop robust methods to monitor surface quality online. An automated visual assessment of specular coated surfaces would not only provide a cost-effective and reliable solution for industry but also facilitate the implementation of a real-time feedback loop. The scope of this thesis is a subset of the inspection technology that enables real-time closed-loop control of surface quality, and it concentrates on one common surface defect, the seed defect. The machine vision system design uses surface reflectance models as its rational basis. Using a single high-contrast image, the height of the seed defect is computed; the result is obtained rapidly and is a reasonably accurate approximation of the actual height.
377

Module property verification: A method to plan and perform quality verifications in modular architectures

Kenger, Patrik January 2006 (has links)
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the modules' properties reduce the effort to develop new product variants and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and erodes the benefits of a modular product architecture, particularly when the verifications are extensive and the frequency of detected defects is high. Because of the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high-quality products can be delivered. As a result, the total benefits of a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verifications already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected earlier than with verification of a complete product, and the number of verifications is decreased. The MPV method is built up of three phases. In Phase A, candidate modules are evaluated on the basis of the costs and lead-time of the verifications and of repairing defects; an MPV-index is obtained which quantifies the module and indicates whether it should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules, to determine the extent to which supplementary verifications at product level are needed. Phase C supports the selection of the final verification strategy; the cost and lead-time of the supplementary verifications are considered together with the results from Phases A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost- and time-efficient, company-specific verifications. A practical application in industry shows how the MPV method can be used and the benefits that follow.
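Phase A condenses each candidate module into an MPV-index from the costs and lead-times of verifying at module level versus product level. The thesis's actual formula is not given in this abstract, so the ratio below is only one illustrative way such an index could be computed; every field name, value and weight is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ModuleEstimate:
    # Estimated cost and lead-time of verifying (and repairing defects found)
    # at module level vs. letting defects slip to product-level verification.
    module_verify_cost: float
    module_verify_leadtime: float
    product_verify_cost: float
    product_verify_leadtime: float

def mpv_index(m: ModuleEstimate, cost_weight: float = 0.5) -> float:
    """Illustrative MPV-index: > 1 suggests module-level verification (MPV)
    is cheaper/faster than deferring to product level; < 1 suggests the
    opposite. The weighting between cost and lead-time is an assumption."""
    cost_ratio = m.product_verify_cost / m.module_verify_cost
    time_ratio = m.product_verify_leadtime / m.module_verify_leadtime
    return cost_weight * cost_ratio + (1.0 - cost_weight) * time_ratio

if __name__ == "__main__":
    drive_unit = ModuleEstimate(1200.0, 2.0, 4500.0, 6.0)  # hypothetical module
    print(f"MPV-index: {mpv_index(drive_unit):.2f}")       # > 1 -> verify as module
```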
378

Listeria monocytogenes and Ready-to-Eat Meats: Tackling a Wicked Problem using Grounded Theory

Rebellato, Steven 16 November 2012 (has links)
Background: Listeria monocytogenes and ready-to-eat meats have garnered considerable attention in Canada over the past decade as a result of foodborne outbreaks and product recalls that continue to transpire. A number of factors suggest that ready-to-eat meats and Listeria monocytogenes constitute a wicked problem. They include (among others) the number of stakeholders involved in the processing, distribution and inspection of ready-to-eat meats in Ontario, the ubiquitous and hardy nature of the organism, and the challenges associated with eliminating it from ready-to-eat meat products and processing environments. Since Ontario public health units play an integral part in the inspection of ready-to-eat meats in the province, it is important to determine their current role in the wicked problem in order to identify possible solutions for change. Purpose: The purposes of the study were: (1) to determine how Ontario public health units address the wicked problem of Listeria monocytogenes and ready-to-eat meats in their food safety inspection programs using the provincial regulatory framework in addition to research, knowledge translation and innovation; and (2) to develop a theory that identifies gaps (if any) in public health unit inspection practices, provincial legislation or food safety research and thereby generates recommendations to reduce the incidence of listeriosis resulting from consumption of RTE meat products. Methodology: The research design used the principles of grounded theory to guide the interview and survey methodology and the subsequent data analyses. The study was completed in three phases: interviews were conducted in the first two phases and a survey in the last. Interviews were conducted with public health unit 'food safety leads' who met pre-determined eligibility criteria. Following methods used in previous studies, interview data were analyzed in four stages of theory development using a grounded theory approach. Through substantive coding and constant comparative methods, core categories were identified in each of the study phases. As a result, theoretical saturation was reached, leading to the process of theoretical coding and the emergence of the study theory. Results: In total, 27 of 36 public health units participated in the study. Eleven public health units participated in the first two phases (the interviews), while 25 public health units (45 participants in total) participated in the survey. The study core category, 'reactive and regulatory practice', evolved from the results of the interviews and survey. As a result, it was determined that: (1) the Ontario provincial regulatory framework, including the Food Premises Regulation, is almost exclusively responsible for directing food safety inspection practices in food premises; (2) food safety inspection and investigation activities associated with listeriosis outbreaks are the focus of Listeria monocytogenes and ready-to-eat meat research; and (3) innovation and knowledge translation are not currently influenced by inspection practice, as a result of the food safety framework, which does not require or encourage them.
Using the processes of theoretical integration and theoretical coding, the following theory emerged from the data analyses: Ontario public health units manage ready-to-eat meats and Listeria monocytogenes through general-population and reactive regulatory processes that focus on local-level, end-product, hazard-reduction strategies for established risks in inspected food premises. Strengths and Limitations: The study had several strengths, including being the first of its kind to frame ready-to-eat meats and Listeria monocytogenes as part of a wicked problem. It was also the first study to use grounded theory to illuminate the function and role of Ontario public health units in managing Listeria monocytogenes and ready-to-eat meats. The study also has a number of limitations, including the sample size, the participant inclusion process through provincial public health unit senior management, the generalizability of the results, and the method by which interviews were conducted with participants. Implications: The results of the study have implications for public health researchers and policy and regulatory makers in the province of Ontario. They point to improved management of Listeria monocytogenes and ready-to-eat meats in food premises using a proactive approach. Conclusions: Using a grounded theory approach, this study demonstrated that Ontario public health units manage ready-to-eat meats and Listeria monocytogenes through reactive and regulatory food safety inspection practices. Survey and interview results indicate that study participants aspire to evidence-based regulatory and program amendments that would allow proactive and targeted microbial risk-reduction activities at the local level, focused on vulnerable populations. The study substantiates that amendments to the Ontario Food Safety program and, in particular, the Food Premises Regulation are necessary.
379

An automated visual inspection system for bare hybrid boards

Eskenazi, Cem. January 1985 (has links)
No description available.
380

Visual Inspection Of Pharmaceutical Color Tablets

Akturk, Deniz 01 May 2006 (has links) (PDF)
In this work a machine vision system for inspecting pharmaceutical color tablets is presented and implemented. Nonparametric clustering-based segmentation is fast and therefore appropriate for real-time applications; two nonparametric clustering methods, the Nearest Neighbor algorithm and the MaxShift algorithm, are applied in the RGB and HSV color spaces as the segmentation step. The implemented algorithm allows the system to detect missing and broken tablets, tablet fragments, and the color, size, and shape of individual tablets in pharmaceutical blisters, in real time. The system has two operation modes, 'training' and 'inspection'. In training mode, the operator selects one point on any tablet in a defect-free captured image. In the correction step an optimization algorithm is required, for which the Powell and Downhill Simplex methods are used. The captured image is then corrected for spatial color nonuniformity and segmented, and the position, size, shape, and color of each tablet are extracted. The correction and segmentation models and the extracted features generated in training mode are saved, together with user-defined values, to form the blister model. Each image acquired in inspection mode is corrected and segmented according to the blister model, and the blisters are classified as 'good' or 'bad' by comparing the extracted feature values with the user-defined tolerances stored in the blister model.
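The nearest-neighbor segmentation step assigns each pixel to the closest reference color seeded during training. A minimal sketch of that idea in RGB space (the reference colors, threshold, and synthetic image are illustrative assumptions, not the thesis's trained blister model):

```python
import numpy as np

# Reference colors seeded from the operator's click in training mode plus the
# blister background; the values here are illustrative.
REFERENCE_COLORS = np.array([
    [200, 150, 150],   # assumed tablet color (RGB)
    [230, 230, 235],   # assumed blister/background color
], dtype=float)

def segment_tablets(image_rgb: np.ndarray, max_dist: float = 60.0) -> np.ndarray:
    """Nearest-neighbor segmentation: label each pixel with the index of the
    closest reference color, or -1 if no reference is within max_dist
    (e.g. debris or a foreign fragment)."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    # Euclidean distance from every pixel to every reference color.
    dists = np.linalg.norm(pixels[:, None, :] - REFERENCE_COLORS[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    labels[np.min(dists, axis=1) > max_dist] = -1
    return labels.reshape(image_rgb.shape[:2])

if __name__ == "__main__":
    # Tiny synthetic "blister" image: mostly background with a tablet-colored patch.
    img = np.full((20, 20, 3), 232, dtype=np.uint8)
    img[5:12, 5:12] = [198, 152, 148]
    labels = segment_tablets(img)
    print("tablet pixels found:", int(np.sum(labels == 0)))
```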
