11

Enhanced image analysis, a tool for precision metrology in the micro and macro world

Daemi, Bita January 2017 (has links)
The need for high speed and cost efficient inspection in manufacturing lines has led to a vast usage of camera-based vision systems. The performance of these systems is sufficient to determine shape and size, but hardly to an accuracy level comparable with traditional metrology tools. To achieve high precision shape/position/defect measurements, the camera techniques have to be combined with high performance image metrology techniques which are developed and adapted to the manufactured components. The focus of this thesis is the application of enhanced image analysis as a tool for high precision metrology. Dedicated algorithms have been developed, tested and evaluated in three practical cases ranging from micro manufacturing at sub-micron precision to meter sized aerospace components with precision requirements in the 10 μm range. The latter measurement challenge was solved by low cost standard consumer products, i.e. digital cameras in a stereo configuration and structured light from a gobo-projector. Combined with high-precision image analysis and a new approach in camera calibration and 3D reconstruction for precise 3D shape measurement of meter sized surfaces, the achievement was fulfilled and verified by two conventional measurement systems: a high precision coordinate measurement machine and a laser scanner. The sub-micron challenge was the implementation of image metrology for verification of micro manufacturing installations within a joint European infrastructure network, EUMINAfab. The results were an unpleasant surprise for some of the participating laboratories, but became a big step forward to improve the dimensional accuracy of the investigated laser micro machining, micro milling and micro-printing systems, since the accuracy of these techniques is very difficult to assess. The third high precision metrology challenge was the measurement of long-range, low-amplitude topographic structures on specular (shiny) aerodynamic surfaces. In this case the Fringe Reflection Technique (FRT) was applied and image analysis algorithms were used to evaluate the fringe deformation as a measure of the surface slopes to obtain high resolution data. The result was compared with an interferometric analysis showing height deviation in the range of tens of micrometers over a lateral extension of several cm. / QC 20170523 / LOCOMACHS / EUMINAfab / Cleansky
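As a rough illustration of the stereo reconstruction step described in this abstract, the sketch below triangulates a single 3D point from two calibrated views using standard linear (DLT) triangulation; the projection matrices and pixel coordinates are invented placeholders, and this is not the calibration/reconstruction approach developed in the thesis.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same point in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Placeholder example: two cameras one unit apart along x, both looking down z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_point(P1, P2, (0.25, 0.25), (0.0, 0.25)))  # ~ [1, 1, 4]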
12

Controlling the speed of film with high precision in a line scanner / Styrning av filmhastighet med hög precision i linjescanner

Rosenius, Magnus January 2003 (has links)
In this master thesis, a system has been designed that is used to detect the perforation holes on a film in a line-scanning film scanner. The film scanner is used to scan regular film taken by high-speed cameras during tests of, for example, missile launches or vehicle crash tests. The system consists of a PLD that detects the perforation holes on the film using a signal from a digital line-scanning CCD camera. A main issue has been to make the detection procedure robust and independent of the different types of films encountered in real-life situations. The result from the detection is used to generate control signals to the film speed regulation mechanism inside the film scanner, which then regulates the velocity of the film. To make the detection and regulation more sensitive, a part-of-line precision has been developed to calculate where, inside a line, the actual hole is positioned. The system has been programmed in VHDL, synthesized, implemented and fitted into a Xilinx Spartan (XCS10-3-PC84) Field Programmable Gate Array (FPGA). The implementation has been simulated but not tested in real hardware.
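As a rough sketch of the "part-of-line" idea (an assumed illustration of the principle only, not the VHDL design described above), the hole centre can be placed between scanned lines by taking an intensity-weighted centroid of the per-line detector response:

import numpy as np

def subline_hole_position(line_scores):
    """Estimate the hole centre with sub-line precision.

    line_scores : 1-D array, one detector response per scanned line
                  (e.g. number of pixels above a brightness threshold).
    Returns the centroid of the response in units of line index.
    """
    s = np.asarray(line_scores, dtype=float)
    if s.sum() == 0:
        raise ValueError("no hole response in this window")
    idx = np.arange(len(s))
    return (idx * s).sum() / s.sum()

# A hole whose response peaks between lines 3 and 4:
print(subline_hole_position([0, 0, 2, 9, 9, 2, 0]))  # ~3.5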
14

Hydrologic and Ecological Effects of Watershed Urbanization: Implication for Watershed Management in Hillslope Regions

Sung, Chan Yong May 2010 (has links)
In this study, I examined the effect of watershed urbanization on the invasion of alien woody species in riparian forests. This study was conducted in three major steps: 1) estimating the degree of watershed urbanization using impervious surface maps extracted from remote sensing images; 2) examining the effect of urbanization on hydrologic regime; and 3) investigating the relationship between watershed urbanization and ecosystem invasibility of a riparian forest. I studied twelve riparian forests along urban-rural gradients in Austin, Texas. Hydrologic regimes were quantified by transfer function (TF) models using four-year daily rainfall-streamflow data in two study periods (10/1988-09/1992 and 10/2004-09/2008), between which Austin had experienced rapid urbanization. For each study period, an impervious surface map was generated from a Landsat TM image by a support vector machine (SVM) with pairwise coupling. SVM estimated impervious surface more accurately than other subpixel mapping methods. Ecosystem invasibilities were assessed by the relative alien cover (RAC) of riparian woody species communities. The results showed that the effects of urbanization differ by hydrogeologic conditions. Of the study watersheds, seven located in a hillslope region experienced diminishing peakflows between the two study periods, which is contrary to current urban hydrologic models. I attributed the decreased peakflows to land grading that transformed a hillslope into a stair-stepped landscape. In the rest of the watersheds, peakflow diminished between the two study periods, perhaps due to the decrease in stormwater infiltration and groundwater pumpage that lowered the groundwater level. In both types of watersheds, streamflow rising during a storm event receded more quickly as watersheds became more urbanized. This study found a positive relationship between RAC and watershed impervious surface percentage. RAC was also significantly related to flow recession and canopy gap percentages, both of which are indicators of hydrologic disturbance. These results suggest that urbanization facilitated the invasion of alien species in riparian forests by intensifying hydrologic disturbance. The effects of urbanization on ecosystems are complex and vary by local hydrogeologic conditions. These results imply that protection of urban ecosystems should be based on a comprehensive and large-scale management plan.
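A minimal sketch of the SVM classification step, assuming per-pixel spectral features and training labels as placeholders, might look as follows (scikit-learn's probability outputs are obtained by combining binary SVMs via pairwise coupling, in the spirit of the method named above):

import numpy as np
from sklearn.svm import SVC

# Placeholder training data: rows are pixels, columns are TM band values.
X_train = np.random.rand(200, 6)
y_train = np.random.randint(0, 3, 200)   # e.g. 0=vegetation, 1=impervious, 2=water

# probability=True makes scikit-learn combine binary SVMs via pairwise coupling.
clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

X_pixels = np.random.rand(10, 6)          # pixels to classify
class_prob = clf.predict_proba(X_pixels)  # per-class membership probabilities
impervious_fraction = class_prob[:, 1]    # soft (sub-pixel) impervious estimate
print(impervious_fraction)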
15

Analysis, comparison and modification of various Particle Image Velocimetry (PIV) algorithms

Estrada Perez, Carlos Eduardo 17 February 2005 (has links)
A program based on particle tracking velocimetry (PTV) was developed in this work. The program was successfully validated by means of artificial images where parameters such as radius, concentration, and noise were varied in order to test their influence on the results. This program uses the mask cross-correlation technique for particle centroid location. Sub-pixel accuracy is achieved using two different methods, the three-point Gaussian interpolation method and the center-of-gravity method; the second method is only used if the first method fails. The object matching algorithm between frames uses cross-correlation with a non-binarized image. A performance comparison between different particle image velocimetry (PIV) and PTV algorithms was done using the international standard PIV challenge artificial images. The best performance was obtained by the program developed in this work: it showed the best accuracy and the best spatial resolution by finding the largest number of correct vectors of all the algorithms tested. A procedure is proposed to obtain error estimates for real images based on errors calculated with experimental ones. Using this procedure, a real PIV image with 20% noise has an estimated average error of 0.1 pixel. Results of the analysis of 200 experimental images are shown for the two best PTV algorithms.
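The three-point Gaussian interpolation named above is a standard sub-pixel peak estimator for correlation maps; a sketch of the one-dimensional version, with the centre-of-gravity estimate as a fallback, could look like this (illustrative only, not the thesis code):

import numpy as np

def gauss3_subpixel(c_left, c_peak, c_right):
    """Three-point Gaussian sub-pixel offset around the correlation peak.

    Returns the offset (in pixels, roughly between -0.5 and 0.5) of the true
    peak relative to the integer peak location. Requires strictly positive values.
    """
    l, p, r = np.log(c_left), np.log(c_peak), np.log(c_right)
    denom = 2.0 * (l - 2.0 * p + r)
    if denom == 0:
        raise ZeroDivisionError("flat peak; fall back to centre of gravity")
    return (l - r) / denom

def cog_subpixel(c_left, c_peak, c_right):
    """Centre-of-gravity fallback, usable when the Gaussian fit fails."""
    return (c_right - c_left) / (c_left + c_peak + c_right)

# Correlation samples around the peak of a synthetic Gaussian shifted by +0.2 px:
x = np.array([-1.0, 0.0, 1.0])
c = np.exp(-((x - 0.2) ** 2) / 0.5)
print(gauss3_subpixel(*c))   # ~0.2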
16

Subpixel Image Co-Registration Using a Novel Divergence Measure

Wisniewski, Wit Tadeusz January 2006 (has links)
Sub-pixel image alignment estimation is desirable for co-registration of objects in multiple images to a common spatial reference and as alignment input to multi-image processing. Applications include super-resolution, image fusion, change detection, object tracking, object recognition, video motion tracking, and forensics. Information theoretical measures are commonly used for co-registration in medical imaging. The published methods apply Shannon's Entropy to the Joint Measurement Space (JMS) of two images. This work introduces into the same context a new set of statistical divergence measures derived from Fisher Information. The new methods described in this work are applicable to uncorrelated imagery and imagery that becomes statistically least dependent upon co-alignment. Both characteristics occur with multi-modal imagery and cause cross-correlation methods, as well as maximum dependence indicators, to fail. Fisher Information-based estimators, together as a set with an Entropic estimator, provide substantially independent information about alignment. This increases the statistical degrees of freedom, allowing for precision improvement and for reduced estimator failure rates compared to Entropic estimator performance alone. The new Fisher Information methods are tested for performance on real remotely-sensed imagery that includes Landsat TM multispectral imagery and ERS SAR imagery, as well as randomly generated synthetic imagery. On real imagery, the co-registration cost function is qualitatively examined for features that reveal the correct point of alignment. The alignment estimates agree with manual alignment to within manual alignment precision. Alignment truth in synthetic imagery is used to quantitatively evaluate co-registration accuracy. The results from the new Fisher Information-based algorithms are compared to Entropy-based Mutual Information and correlation methods, revealing equal or superior precision and a lower failure rate at signal-to-noise ratios below one.
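For reference, the entropic baseline that the new divergence measures are compared against, Shannon mutual information computed over the joint intensity histogram (the Joint Measurement Space), can be sketched as follows; the bin count and test images are placeholders:

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Shannon mutual information of two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

# Alignment search: MI peaks when the shifted copy lines up with the original.
rng = np.random.default_rng(0)
base = rng.random((128, 128))
shifted = np.roll(base, 3, axis=1)
scores = {dx: mutual_information(base, np.roll(shifted, -dx, axis=1)) for dx in range(6)}
print(max(scores, key=scores.get))  # expected: 3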
17

Model Agnostic Extreme Sub-pixel Visual Measurement and Optimal Characterization

January 2012 (has links)
It is possible in a properly controlled environment, such as industrial metrology, to make significant headway into the non-industrial constraints on image-based position measurement using the techniques of image registration and achieve repeatable feature measurements on the order of 0.3% of a pixel, or about an order of magnitude improvement on conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing for calibration of a laser scribe and online tracking of a velocimeter using video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Use of the proper negative offset of the template function has the result of creating a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline function, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model and to determine the function describing the uncertainty in that optimal smooth. An example of empiric derivation of the parameters for a rudimentary Kalman Filter from this is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized KF with performance and simplicity that exceeds "textbook" implementations thereof. Validation of the measurement includes that, in the analytic case, it led to arbitrary precision in measurement of a feature; in a reasonable test case using the methods proposed, a reasonable and consistent maximum error of around 0.3% of the length of a pixel was achieved, and in practice, using pixels that were 700 nm in size, feature position was located to within ±2 nm. Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter. / Dissertation/Thesis / Measurement Results (part 1) / Measurement Results (part 2) / General Presentation / M.S. Mechanical Engineering 2012
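As an illustration of a "rudimentary Kalman Filter" of the kind mentioned above, a minimal constant-velocity filter for scalar position measurements is sketched below; the process- and measurement-noise values are assumptions, not the empirically derived parameters from this work:

import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for 1-D position measurements.

    q : assumed process-noise scale, r : assumed measurement-noise variance.
    Returns the filtered position estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                # we only observe position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

print(kalman_track([0.0, 0.11, 0.19, 0.32, 0.41]))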
18

Robust Real-Time Estimation of Region Displacements in Video Sequences

Skoglund, Johan January 2007 (has links)
The possibility to use real-time computer vision in video sequences gives many opportunities for a system to interact with the environment. Possible ways for interaction are e.g. augmented reality, as in the MATRIS project where the purpose is to add new objects into the video sequence, or surveillance where the purpose is to find abnormal events. The increase in the speed of computers in recent years has simplified this process, and it is now possible to use at least some of the more advanced computer vision algorithms that are available. The computational speed of computers is, however, still a problem; an efficient real-time system requires efficient code and methods. This thesis deals with both problems: one part is about efficient implementations using single instruction multiple data (SIMD) instructions and one part is about robust tracking. An efficient real-time system requires efficient implementations of the computer vision methods used. Efficient implementations require knowledge about the CPU and the possibilities it offers. In this thesis, one method called SIMD is explained. SIMD is useful when the same operation is applied to multiple data, which is usually the case in computer vision, where the same operation is executed on each pixel. Following the position of a feature or object in a video sequence is called tracking. Tracking can be used for a number of applications. The application in this thesis is to use tracking for pose estimation. One way to do tracking is to cut out a small region around the feature, creating a patch, and find the position of this patch in the other frames. To find the position, a measure of the difference between the patch and the image at a given position is used. This thesis thoroughly investigates the sum of absolute differences (SAD) error measure. The investigation involves different ways to improve the robustness and to decrease the average error. One method to estimate the average error, the covariance of the position error, is proposed. An estimate of the average error is needed when different measurements are combined. Finally, a system for camera pose estimation is presented. The computer vision part of this system is based on the results in this thesis. The presentation also contains a discussion about the results of this system. / Report code: LIU-TEK-LIC-2007:5. The report code in the thesis is incorrect.
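The SAD error measure investigated in the thesis is simple enough to sketch directly; the search below slides a patch over a search window and returns the integer displacement minimising the sum of absolute differences (placeholder data, no SIMD optimisation):

import numpy as np

def sad_match(patch, search_window):
    """Return the (row, col) offset in search_window minimising the SAD."""
    ph, pw = patch.shape
    sh, sw = search_window.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            sad = np.abs(search_window[r:r + ph, c:c + pw] - patch).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos, best

# Placeholder data: the patch is hidden at offset (5, 7) in the window.
rng = np.random.default_rng(1)
window = rng.random((32, 32))
patch = window[5:13, 7:15].copy()
print(sad_match(patch, window))   # ((5, 7), 0.0)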
19

Sledování malých změn objektů / Detection of Little Changes

Čírtek, Jiří January 2008 (has links)
This diploma thesis examines the problem of locating edges with accuracy finer than one pixel (subpixel accuracy). As part of this assignment, a program was created that generates three different object shapes. By varying the program's parameters, the location of the objects' centre of gravity is measured with subpixel accuracy. The obtained centre-of-gravity deviations are depicted in graphs.
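The centre-of-gravity measurement with subpixel accuracy described above can be sketched as follows; the generated Gaussian blob is a placeholder standing in for the program's three test shapes:

import numpy as np

def centre_of_gravity(image):
    """Intensity-weighted centroid of an image, with sub-pixel precision."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

# Placeholder: a Gaussian blob centred between pixels at (10.3, 14.7).
ys, xs = np.indices((32, 32))
blob = np.exp(-((ys - 10.3) ** 2 + (xs - 14.7) ** 2) / 8.0)
print(centre_of_gravity(blob))   # approximately (10.3, 14.7)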
20

Sub-pixel Registration In Computational Imaging And Applications To Enhancement Of Maxillofacial Ct Data

Balci, Murat 01 January 2006 (has links)
In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step in order to be able to compare, integrate and fuse the data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT scan (Computed Axial Tomography scan), is a Helical Tomography, which traditionally produces a 2D image of the structures in a thin section of the body. It uses X-ray, which is ionizing radiation. Although the actual dose is typically low, repeated scans should be limited. In dentistry, implant dentistry in specific, there is a need for 3D visualization of internal anatomy. The internal visualization is mainly based on CT scanning technologies. The most important technological advancement which dramatically enhanced the clinician's ability to diagnose, treat, and plan dental implants has been the CT scan. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter other unwanted artifacts in the form of bright regions, flares and erroneous pixels due to dental bridges, metal braces, etc. Currently, removing and cleaning up the data from acquisition backscattering imperfections and unwanted artifacts is performed manually, which is as good as the experience level of the technician. On the other hand the process is error prone, since the editing process needs to be performed image by image. We address some of these issues by proposing novel registration methods and using stonecast models of patient's dental imprint as reference ground truth data. Stone-cast models were originally used by dentists to make complete or partial dentures. The CT scan of such stone-cast models can be used to automatically guide the cleaning of patients' CT scans from defects or unwanted artifacts, and also as an automatic segmentation system for the outliers of the CT scan data without use of stone-cast models. Segmented data is subsequently used to clean the data from artifacts using a new proposed 3D inpainting approach.
