271 |
Kontrola zobrazení textu ve formulářích / Quality Check of Text in Forms
Moravec, Zbyněk January 2017 (has links)
The purpose of this thesis is to check the quality of button text rendering on photographed monitors. These photographs contain a variety of image distortions, which complicate the subsequent recognition of graphic elements. The paper outlines several possibilities for detecting buttons on forms and elaborates on the implemented detection, which is based on contour shape description. Once the buttons are found, their defects are detected. Additionally, the thesis describes the automatic identification of the highest-quality picture for documentation purposes.
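The abstract names contour-shape description as the basis for button detection; a minimal sketch of that idea using OpenCV follows (illustrative only, not the thesis code; the area and aspect-ratio thresholds are invented assumptions):

```python
import cv2

def find_button_candidates(gray, min_area=500, ar_range=(1.5, 6.0)):
    """Detect rectangular, button-like contours in a grayscale photo.

    min_area and ar_range are illustrative thresholds, not values
    from the thesis."""
    # Adaptive thresholding copes with the uneven lighting of a
    # photographed monitor.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    buttons = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        # Describe the contour shape; buttons are roughly four-sided.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        x, y, w, h = cv2.boundingRect(approx)
        if len(approx) == 4 and ar_range[0] <= w / h <= ar_range[1]:
            buttons.append((x, y, w, h))
    return buttons
```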
|
272 |
Metody potlačení strukturního šumu typu spekle / Speckle noise suppression methods in ultrasound images
Tvarůžek, Marek January 2013 (has links)
This diploma thesis deals with methods for despeckling ultrasound images. Ultrasound imaging and its related artifacts are described in detail; the modality has its pros and cons, and speckle noise is a disadvantage that needs to be addressed. Models of the origin of this specific noise are also discussed. The practical part of the thesis focuses on filtering speckled images with basic and advanced methods: linear filtering, median filtering, the Frost filter, QGDCT, geometric filtering, anisotropic diffusion filtering, and wavelet-based filtering. The results are compared on the basis of objective criteria.
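As a hedged illustration of the adaptive local-statistics family the abstract mentions (the Frost filter belongs to it), here is a compact Lee filter together with the multiplicative speckle model it targets; this is a sketch under assumed parameters, not the thesis implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def add_speckle(img, sigma=0.2, rng=None):
    # Multiplicative noise model: g = f * (1 + n), n ~ N(0, sigma^2).
    rng = rng or np.random.default_rng(0)
    return img * (1.0 + sigma * rng.standard_normal(img.shape))

def lee_filter(img, size=5):
    # Lee filter: smooth strongly in flat regions (local variance near
    # the noise level), weakly near edges (high local variance).
    mean = uniform_filter(img, size)
    var = uniform_filter(img**2, size) - mean**2
    noise_var = img.var()            # crude global noise estimate
    weight = var / (var + noise_var)
    return mean + weight * (img - mean)
```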
|
273 |
The impact of defective ultrasound transducers on the evaluation results of ultrasound imaging of blood flow / Effekter av defekta ultraljudsgivare på utvärderingsresultaten av ultraljudtester på blodflödet
Eghbali, Ladan January 2010 (has links)
After X-ray, ultrasound is now the most common of all medical imaging technologies, particularly in obstetrics and cardiology. In addition, the hazards of ultrasound are perceived to be insignificant compared with those of X-rays. Since the study of cardiovascular diseases, blood flow patterns, and fetal development is essential to human life, the accuracy and proper functioning of ultrasonic systems are of great importance; hence, quality control of ultrasonic transducers is necessary. In this thesis, a system to standardize the acceptance criteria for quality control of ultrasonic transducers is described. On this basis, a study of ultrasound images was conducted to compare and evaluate the image quality produced by different types of transducers in different conditions, i.e. defective or functional. A clinical study was also carried out to evaluate our hypothesis in real cases at the departments of cardiology and gynecology. The results show that the perception of quality is somewhat subjective and that clinical studies are time-consuming, but quality factors such as the ability to accurately identify anatomical structures and functional capabilities are of great importance and help.
|
274 |
A GENERAL FRAMEWORK FOR CUSTOMER CONTENT PRINT QUALITY DEFECT DETECTION AND ANALYSIS
Runzhe Zhang (11442742) 11 July 2022 (has links)
Print quality (PQ) is one of the most significant issues with electrophotographic printers. PQ issues have many causes, such as limitations of the electrophotographic process, faulty printer components, or other failures of the print mechanism, and these can produce different defects, such as streaks, bands, gray spots, text fading, and color fading. It is important to analyze the nature and causes of different print defects in order to repair printers more efficiently and to improve the electrophotographic process.
We design a general framework for print quality detection and analysis on customer content. The framework takes as input the original digital image saved on the computer together with the scanned print, and consists of two main modules: image pre-processing, and print-defect feature extraction and classification. The first module, image pre-processing, includes image registration, color calibration, and region of interest (ROI) extraction. The ROI extraction step extracts four different kinds of ROI from the digital master image, because different ROIs exhibit different print defects: for example, the symbol ROI exhibits text fading, while the raster ROI exhibits color fading. The second module contains detection and analysis algorithms for the defects in each ROI type and classifies the defects by severity using their feature vectors. Four detection methods are proposed: uniform-color-area streak detection, color text fading detection in symbol ROIs, color fading detection in raster ROIs using a novel unsupervised clustering method, and streak detection in raster ROIs. The details of these algorithms are presented in this thesis.
We also present two further print quality projects: print margin skew detection, and print velocity simulation and estimation. For margin skew detection, we propose an algorithm that uses Hough line detection to measure printing margin and skew errors, verified against actual scanned pages. For print velocity simulation and estimation, we propose a print velocity simulation tool, design a dedicated print velocity test page, and design a velocity estimation algorithm based on dynamic time warping.
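The margin-skew project is described as Hough-line based; a minimal sketch of that approach with OpenCV's probabilistic Hough transform follows (the thresholds here are illustrative assumptions, not the thesis's parameters):

```python
import cv2
import numpy as np

def estimate_page_skew(scan_gray):
    """Estimate the skew angle (degrees) of a scanned page from its
    near-horizontal text and margin lines."""
    edges = cv2.Canny(scan_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=scan_gray.shape[1] // 3,
                            maxLineGap=20)
    if lines is None:
        return 0.0
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 10:          # keep near-horizontal lines only
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0
```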
|
275 |
Photon Counting X-ray Detector Systems
Norlin, Börje January 2005 (has links)
This licentiate thesis concerns the development and characterisation of X-ray imaging detector systems. “Colour” X-ray imaging opens up new perspectives in medical X-ray diagnosis and in industrial X-ray quality control. The difference in absorption between “colours” can be used to discern materials in the object; for instance, this information might be used to identify diseases such as brittle-bone disease. The “colour” of the X-rays can be identified if the detector system processes each X-ray photon individually. Such a detector system is called a “single photon processing” system or, less precisely, a “photon counting” system. With modern technology it is possible to construct photon counting detector systems that can resolve details down to approximately 50 µm. With such small pixels, however, a problem arises. In a semiconductor detector, each absorbed X-ray photon creates a cloud of charge that contributes to the acquired picture. For high photon energies the size of the charge cloud is comparable to 50 µm, and the charge may be distributed across several pixels in the picture. This charge sharing is a key problem: not only is the resolution degraded, but the “colour” information in the picture is destroyed. This thesis discusses the charge sharing problem that limits “colour” X-ray imaging. Image quality, detector efficiency and “colour correctness” are studied on pixellated detectors from the MEDIPIX collaboration. Characterisation measurements and simulations are compared in order to understand the physical processes that take place in the detector, and simulations provide pointers for the future development of photon counting X-ray systems. Charge sharing can be suppressed by introducing 3D detector structures or by developing readout systems that correct the crosstalk between pixels.
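The charge-sharing effect described above can be illustrated with a one-dimensional toy model (not from the thesis; the 50 µm pitch matches the abstract, while the charge-cloud width is an assumed value): a Gaussian charge cloud is integrated over adjacent pixels, showing how a hit near a pixel boundary splits the photon's signal and corrupts its "colour".

```python
import numpy as np
from scipy.stats import norm

def shared_charge_fractions(hit_x_um, pitch_um=50.0, sigma_um=10.0):
    """Fractions of a Gaussian charge cloud collected by pixels -1, 0, +1.

    hit_x_um is measured from the left edge of pixel 0; sigma_um is an
    assumed charge-cloud width, not a value from the thesis."""
    edges = np.array([-pitch_um, 0.0, pitch_um, 2.0 * pitch_um])
    cdf = norm.cdf(edges, loc=hit_x_um, scale=sigma_um)
    return np.diff(cdf)

# A hit 2 um from a pixel boundary splits its charge almost evenly, so
# each pixel registers only about half of the photon energy:
print(shared_charge_fractions(2.0))    # ~[0.42, 0.58, 0.00]
```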
|
276 |
Image Performance Characterization of an In-Beam Low-Field Magnetic Resonance Imaging System During Static Proton Beam Irradiation
Gantz, Sebastian, Schellhammer, Sonja M., Hoffmann, Aswin L. 20 January 2023 (has links)
Image guidance using in-beam real-time magnetic resonance (MR) imaging is expected to improve the targeting accuracy of proton therapy for moving tumors, by reducing treatment margins, detecting interfractional and intrafractional anatomical changes and enabling beam gating. The aim of this study is to quantitatively characterize the static magnetic field and image quality of a 0.22 T open MR scanner that has been integrated with a static proton research beamline. The magnetic field and image quality studies are performed using high-precision magnetometry and standardized diagnostic image quality assessment protocols, respectively. The magnetic field homogeneity was found to be typical of the scanner used (98 ppm). Operation of the beamline magnets changed the central resonance frequency and magnetic field homogeneity by a maximum of 16 Hz and 3 ppm, respectively. It was shown that the in-beam MR scanner features sufficient image quality and influences of simultaneous irradiation on the images are restricted to a small sequence-dependent image translation (0.1–0.7 mm) and a minor reduction in signal-to-noise ratio (1.3%–5.6%). Nevertheless, specific measures have to be taken to minimize these effects in order to achieve accurate and reproducible imaging which is required for a future clinical application of MR-integrated proton therapy.
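The reported numbers can be cross-checked with the proton Larmor relation f0 = (γ/2π)·B0; a quick back-of-the-envelope calculation (not part of the study) shows the 16 Hz shift is indeed on the ppm scale:

```python
gamma_bar = 42.577e6   # proton gyromagnetic ratio / 2*pi  [Hz/T]
B0 = 0.22              # static field of the open MR scanner [T]

f0 = gamma_bar * B0    # central resonance frequency, ~9.37 MHz
shift_ppm = 16.0 / f0 * 1e6
print(f"f0 = {f0 / 1e6:.2f} MHz; a 16 Hz shift is {shift_ppm:.1f} ppm")
# ~1.7 ppm: consistent in magnitude with the reported 3 ppm change and
# small against the scanner's 98 ppm baseline homogeneity.
```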
|
277 |
Automated Complexity-Sensitive Image Fusion
Jackson, Brian Patrick January 2014 (has links)
No description available.
|
278 |
Structural Characterization of Fibre Foam Materials Using Tomographic Data
Satish, Shwetha January 2024 (has links)
Plastic foams, such as Styrofoam, protect items during transport. Given the recycling challenges of these foams, there is growing interest in developing alternatives from renewable resources, particularly cellulose fibres, for packaging. A deep understanding of the foam's structure, specifically achieving a uniform distribution of small pore sizes, is crucial to enhancing its mechanical properties. Prior work highlights the need for improved X-ray and image-processing techniques to address challenges in data acquisition and analysis. In this study, X-ray microtomography equipment was used to image a fibre foam sample, and software such as XMController and XMReconstructor produced 2D projection images at different magnifications (2X, 4X, 10X, and 20X). ImageJ and Python algorithms were then used to distinguish pores from fibres in the acquired images and to characterize the pores. This included bilateral filtering, which reduced background noise while preserving fibres in the grayscale images; Otsu thresholding, which converted the grayscale image to a binary image; and the inverted binary image, from which the local thickness image was computed. The local thickness image represents fibres with pixel value zero and pores as inscribed spheres of different intensities that encode their characteristics. As the magnification of the local thickness images increased, the pore area, pore volume, pore perimeter, and total pore count decreased, indicating a shift towards a more uniform distribution of smaller pores; histograms, scatter plots, and pore intensity distributions visualize this trend. Similarly, pore density increased, porosity decreased, and the specific surface area remained constant with increasing magnification, suggesting a more compact structure. Objective image quality metrics, such as PSNR, RMSE, SSIM, and NCC, were used to compare grayscale images at different magnifications; as the number of projections increased, the 10X vs. 20X and 2X vs. 4X pairs consistently performed well in terms of image quality. The applied methodologies, comprising pore analysis and image quality metrics, exhibit significant strengths in characterising porous structures and evaluating image quality.
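A condensed sketch of the described pipeline, assuming scikit-image and SciPy (the Euclidean distance transform stands in for the full local-thickness computation, which fits maximal inscribed spheres; parameter values are illustrative, not the thesis's):

```python
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.restoration import denoise_bilateral

def pore_thickness_map(gray):
    """Approximate pore-size map from a 2D grayscale tomography slice
    (float image in [0, 1] assumed)."""
    # Bilateral filtering: suppress background noise, preserve fibre edges.
    smooth = denoise_bilateral(gray, sigma_color=0.05, sigma_spatial=3)
    # Otsu threshold separates fibres (assumed bright) from pores.
    fibres = smooth > threshold_otsu(smooth)
    pores = ~fibres                  # the inverted binary image
    # Distance from each pore pixel to the nearest fibre pixel: a lower
    # bound on local thickness; fibres get value zero, as in the thesis.
    return ndimage.distance_transform_edt(pores)

# Porosity is then simply the pore fraction of the image:
# porosity = (pore_thickness_map(gray) > 0).mean()
```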
|
279 |
Compression Based Analysis of Image Artifacts: Application to Satellite Images
Roman-Gonzalez, Avid 02 October 2013 (has links) (PDF)
This thesis aims at the automatic detection of artifacts in optical satellite images, such as aliasing, A/D conversion problems, striping, and compression noise: in short, all blemishes that are unusual in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult as image resolution improves. For images of low, medium, or high resolution, artifact signatures differ sufficiently from the useful signal to allow their characterization as distortions; when the resolution improves further, however, the artifacts have, in terms of signal theory, a signature similar to that of the interesting objects in an image. Although artifacts are harder to detect in very high resolution images, we need analysis tools that work properly without impeding the extraction of objects from an image. Furthermore, the detection should be as automatic as possible, given the ever-increasing volume of images, which makes any manual detection illusory. Finally, experience shows that artifacts are neither all predictable nor can they always be modeled as expected; thus, artifact detection should be as generic as possible, without requiring a model of their origin or of their impact on an image.

Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing: image quality evaluation, compression, watermarking, attack detection, image tampering, photo montage, steganalysis, etc. In general, the techniques used to address these problems are based on the direct or indirect measurement of intrinsic information and mutual information. This thesis therefore translates these approaches to artifact detection in Earth observation images, drawing in particular on the theories of Shannon and Kolmogorov, including rate-distortion measures and pattern-recognition-based compression. The results of these theories are then used to detect complexities that are too low or too high, or redundant patterns. The test images come from the satellite instruments SPOT, MERIS, etc.

We propose several methods for artifact detection. The first uses the Rate-Distortion (RD) function, obtained by compressing an image with different compression factors, and examines how an artifact can result in a degree of regularity or irregularity that affects the attainable compression rate. The second uses the Normalized Compression Distance (NCD) and examines whether artifacts share similar patterns. The third uses different RD approaches, such as the Kolmogorov Structure Function and Complexity-to-Error Migration (CEM), to examine how artifacts appear in compression-decompression error maps. Finally, we compare our proposed methods with an existing method based on image quality metrics. The results show that artifact detection depends on the artifact intensity and on the type of surface cover contained in the satellite image.
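The NCD used by the second method has a standard definition, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed size. A minimal sketch with zlib as the compressor (the thesis does not specify a compressor; any real one approximates the Kolmogorov ideal):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar inputs,
    near 1 for unrelated ones."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two patches sharing the same (e.g. striping) pattern compress well
# together, so their NCD is small; a dissimilar patch scores higher:
a = bytes(range(256)) * 32
b = bytes(range(256)) * 32
c = bytes([17, 91, 203]) * 2500
print(ncd(a, b), ncd(a, c))
```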
|
280 |
Diffusion Tensor Imaging Analysis for Subconcussive Trauma in Football and Convolutional Neural Network-Based Image Quality Control That Does Not Require a Big Dataset
Ikbeom Jang (5929832) 14 May 2019 (has links)
Diffusion Tensor Imaging (DTI) is a magnetic resonance imaging (MRI)-based technique that has frequently been used to identify brain biomarkers of neurodevelopmental and neurodegenerative disorders because of its ability to assess the structural organization of brain tissue. In this work, I present (1) preclinical findings of a longitudinal DTI study that investigated asymptomatic high school football athletes who experienced repetitive head impacts and (2) an automated pipeline for assessing the quality of DTI images that uses a convolutional neural network (CNN) and transfer learning. The first section addresses the effects of repetitive subconcussive head trauma on the white matter of adolescent brains. Significant concerns exist regarding subconcussive injury in football, since many studies have reported that repetitive blows to the head may change the microstructure of white matter; this is more problematic in youth-aged athletes, whose white matter is still developing. Using DTI and head impact monitoring sensors, regions of significantly altered white matter were identified, and within-season effects of impact exposure were characterized by identifying, for each individual, the volume of regions showing significant changes. The second section presents a novel pipeline for DTI quality control (QC). The complex nature and long acquisition time of DTI make it susceptible to artifacts that often result in inferior diagnostic image quality. We propose an automated QC algorithm based on a deep convolutional neural network (DCNN); adapting transfer learning makes it possible to train a DCNN with a relatively small dataset in a short time. The QC algorithm detects not only motion- or gradient-related artifacts, but also various erroneous acquisitions, including images with regional signal loss or those that have been incorrectly imaged or reconstructed.
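The transfer-learning idea in the second section can be sketched as follows (an illustrative PyTorch setup under assumed choices; the thesis's actual architecture and hyperparameters may differ): freeze an ImageNet-pretrained backbone and train only a small head to label DTI slices as acceptable or artifactual, which is what makes a small dataset sufficient.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumes DTI slices are replicated to 3 channels to match the backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # keep pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable QC head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```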
|