71

Quality Assessment for Halftone Images

Elmèr, Johnny January 2023 (has links)
Halftones are reproductions of images created through the process of halftoning. The goal of a halftone is to create a replica that, viewed at a distance, looks nearly identical to the original. Several methods exist for producing halftones, three of which are error diffusion, DBS and IMCDP. To check whether a halftone would be perceived as high quality there are two options: subjective image quality assessments (IQAs) and objective image quality (IQ) measurements. Because subjective IQAs often take too much time and too many resources, objective IQ measurements are preferred. But since there is no standard for which metric to use with halftones, the question remains which one to choose. For this project, both online and on-location subjective tests were performed in which observers ranked halftoned images by perceived image quality; the images were chosen specifically to cover a wide range of characteristics such as brightness and level of detail. The results of these tests were compiled and compared with the results of eight objective metrics: MSE, PSNR, S-CIELAB, SSIM, BlurMetric, BRISQUE, NIQE and PIQE. The subjective and objective results were compared using Z-scores, and SSIM and NIQE turned out to be the objective metrics that most closely matched the subjective results. The online and on-location subjective tests differed greatly for dark colour halftones and for colour halftones containing smooth transitions, with smaller variation in the other categories. What did not change was the clear preference for DBS by both the observers and the objective IQ metrics, making it the best of the three methods tested. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
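For readers unfamiliar with the first of the three halftoning methods named above, the sketch below implements the classic Floyd-Steinberg variant of error diffusion. The thesis does not say which diffusion kernel was used, so the 7/16, 3/16, 5/16, 1/16 kernel here is an assumption, not the thesis's exact method:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a greyscale image (floats in [0, 1]) by error diffusion:
    threshold each pixel in raster order and push the quantization error
    onto the not-yet-visited neighbours with the Floyd-Steinberg kernel."""
    f = np.asarray(img, float).copy()
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```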
72

Image/video compression and quality assessment based on wavelet transform

Gao, Zhigang 14 September 2007 (has links)
No description available.
73

Quality Measures of Halftoned Images (A Review)

Axelson, Per-Erik January 2003 (has links)
This study is a thesis for the Master of Science degree in Media Technology and Engineering at the Department of Science and Technology, Linköping University. It was carried out from November 2002 to May 2003. Objective image quality measures play an important role in various image processing applications; this paper focuses on quality measures applied to halftoned images. Digital halftoning is the process of generating a pattern of binary pixels that creates the illusion of a continuous-tone image. Algorithms built on this technique produce results of very different quality and characteristics, and to evaluate and improve their performance it is important to have robust and reliable image quality measures. This literature survey gives a general description of digital halftoning and of halftone image quality methods.
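Of the measures such a survey covers, MSE and PSNR are the simplest to state in code. Because a halftone is binary, comparing it pointwise with the continuous-tone original is misleading; the usual practice is to low-pass both images first to mimic viewing at a distance. In the sketch below the plain Gaussian is a crude stand-in for a proper human-visual-system filter, and the sigma value is a placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mse(ref, test):
    return float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))

def psnr(ref, test, peak=1.0):
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def halftone_psnr(original, halftone, sigma=1.5):
    # Low-pass both images before comparing, mimicking viewing distance;
    # sigma is an illustrative placeholder, not a calibrated HVS model.
    return psnr(gaussian_filter(np.asarray(original, float), sigma),
                gaussian_filter(np.asarray(halftone, float), sigma))
```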
75

Applying multiresolution and graph-searching techniques for boundary detection in biomedical images

Munechika, Stacy Mark, 1961- January 1989 (has links)
An edge-based segmentation scheme (i.e. boundary detector) for nuclear medicine images has been developed; it consists of a multiresolution Gaussian-based edge detector working in conjunction with a modified version of Nilsson's A* graph-search algorithm. A multiresolution technique of analyzing the edge-signature plot (edge gradient versus resolution scale) allows the edge detector to match an appropriately sized edge operator to the edge structure, in order to measure the full extent of the edge and thus gain the best compromise between noise suppression and edge localization. The graph-search algorithm uses the output from the multiresolution edge detector as the primary component of a cost function, which is then minimized to obtain the boundary path. The cost function can be adapted to include global information such as boundary curvature, shape, and similarity to a prototype, to help guide boundary detection in the absence of good edge information.
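A minimal sketch of the edge-signature idea, using SciPy's Gaussian gradient magnitude as a stand-in for the thesis's multiresolution edge detector. The scale normalization is a standard assumption (Lindeberg's gamma = 1 choice), not necessarily what the thesis used, and the A* graph search over the resulting cost image is not shown:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_signature(img, scales=(1, 2, 4, 8)):
    """Edge gradient versus resolution scale, one plane per scale.
    Responses are scale-normalized (multiplied by sigma) so that coarse
    operators are not unfairly penalized -- an assumed convention."""
    img = np.asarray(img, float)
    return np.stack([s * gaussian_gradient_magnitude(img, s) for s in scales])

def best_scale(img, scales=(1, 2, 4, 8)):
    # Per pixel, the scale whose operator responds most strongly: the
    # compromise between noise suppression and edge localization.
    return np.take(scales, np.argmax(edge_signature(img, scales), axis=0))
```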
76

Sensor modeling and image restoration for a CCD pushbroom imager

Li, Wai-Mo, 1964- January 1987 (has links)
With the maturing of detector technology, remote sensing imagers are increasingly implemented with charge-coupled devices (CCDs), which offer promising features. The French SPOT system is the first civilian satellite sensor employing a CCD in its detection unit. To obtain the system transfer function (TF), a linear system model is developed in the across-track and along-track directions. The overall system TF, including pixel sampling effects, is then used in the Wiener filter function to derive an optimal restoration function. A restoration line spread function (RLSF) is obtained by taking the inverse Fourier transform of the Wiener filter and multiplying it by a window function. Simulation and empirical tests are described comparing the RLSF to standard kernels used for image resampling in geometric correction. As expected, the RLSF yields superior edge enhancement.
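The restoration-kernel construction reads directly as a few lines of NumPy: build the Wiener filter from the system transfer function, inverse-transform it, and truncate it with a window to get a compact resampling kernel. The flat noise-to-signal ratio, the Hanning window, the FFT length and the tap count below are placeholder assumptions, not values from the thesis:

```python
import numpy as np

def restoration_lsf(h, nsr=0.01, n_fft=256, n_taps=9):
    """Restoration line spread function: inverse DFT of the Wiener filter
    built from the sampled system line spread function h, truncated to a
    compact kernel by a window."""
    H = np.fft.fft(h, n_fft)                    # system transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter, flat NSR assumed
    rlsf = np.real(np.fft.fftshift(np.fft.ifft(W)))
    c = n_fft // 2
    taps = rlsf[c - n_taps // 2 : c + n_taps // 2 + 1]
    return taps * np.hanning(n_taps)            # window limits kernel support
```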
77

Study of the impact of simulated and measured triple coincidences from pixelated PET systems on image quality criteria

Clerk-Lamalice, Julien January 2015 (has links)
In positron emission tomography (PET), data are acquired by detecting pairs of high-energy photons (511 keV) in coincidence. In a highly pixelated system such as the LabPET scanner, Compton scattering in neighbouring crystals frequently triggers the detection of multiple events (multiple coincidences), which are currently rejected from the reconstruction process. These multiple events can significantly increase the scanner's detection efficiency, but it remains to be demonstrated that they can be used to increase sensitivity without degrading image quality criteria such as spatial resolution and contrast. The goal of this work is to demonstrate the influence of including these events in the image reconstruction process. Fixed-criteria methods were used to select triple coincidences obtained from data simulated with the GATE (Geant4 Application for Tomographic Emission) software and from real measurements on the LabPET scanners.
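As a rough illustration of fixed-criteria selection, the sketch below accepts a triple coincidence when two hits sum into an assumed 511 keV photopeak window and assigns the event to the crystal with the larger energy deposit. Both the energy window and the highest-energy-deposit criterion are assumptions for illustration, not necessarily the criteria evaluated in the thesis:

```python
from itertools import combinations

def recover_photon(hits, window=(425.0, 600.0)):
    """hits: [(crystal_id, energy_keV), ...] inside one coincidence window.
    Returns the crystal assigned to the 511 keV photon, or None if the
    fixed criteria reject the event."""
    lo, hi = window
    # A single hit already in the photopeak window needs no recovery.
    full = [c for c, e in hits if lo <= e <= hi]
    if len(full) == 1:
        return full[0]
    # Two hits summing into the window suggest inter-crystal Compton
    # scatter; fixed criterion: keep the crystal with the larger deposit.
    for (c1, e1), (c2, e2) in combinations(hits, 2):
        if lo <= e1 + e2 <= hi:
            return c1 if e1 >= e2 else c2
    return None
```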
78

Novel methods for scatter correction and dual energy imaging in cone-beam CT

Dong, Xue 22 May 2014 (has links)
Excessive imaging dose from repeated scans and poor image quality, mainly due to scatter contamination, are the two bottlenecks of cone-beam CT (CBCT) imaging. This study investigates a method that combines measurement-based scatter correction with a compressed sensing (CS)-based iterative reconstruction algorithm to generate scatter-free images from low-dose data. The scatter distribution is estimated by interpolating/extrapolating measured scatter samples inside blocked areas, and CS-based iterative reconstruction is then carried out on the under-sampled data to obtain scatter-free, low-dose CBCT images. In tabletop phantom studies, with only 25% of the dose of a conventional CBCT scan, the method reduces the overall CT number error from over 220 HU to less than 25 HU and increases the image contrast by a factor of 2.1 in the selected ROIs.

Dual-energy CT (DECT) is another important application of CBCT. DECT shows promise in differentiating materials that are indistinguishable in single-energy CT and facilitates accurate diagnosis. A general problem of DECT is that decomposition is sensitive to noise in the two sets of projection data, resulting in severely degraded quality of the decomposed images. The first DECT study focuses on the linear decomposition method and proposes a combined iterative reconstruction and decomposition approach: the noise in the two initial CT images from separate scans becomes well correlated, which avoids noise accumulation during the decomposition process. To fully explore the benefits of DECT for beam-hardening correction and to reduce the computation cost, the second study focuses on an iterative decomposition method with a non-linear decomposition model for noise suppression in DECT. Phantom results show that these methods achieve superior DECT imaging performance with respect to noise reduction and spatial resolution.
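The measurement-based scatter estimate lends itself to a short sketch: detector pixels in the shadow of the beam blockers record (almost) pure scatter, and interpolating those samples over the whole detector gives a scatter map to subtract. The blocker geometry, the cubic interpolant and the nearest-neighbour fallback below are illustrative assumptions, and the CS-based iterative reconstruction step is not shown:

```python
import numpy as np
from scipy.interpolate import griddata

def correct_scatter(proj, blocker_mask):
    """proj: one projection image; blocker_mask: True where the detector sits
    in a beam blocker's shadow and therefore records (almost) pure scatter."""
    samples = np.argwhere(blocker_mask)        # (row, col) scatter samples
    values = proj[blocker_mask]
    gy, gx = np.mgrid[0:proj.shape[0], 0:proj.shape[1]]
    scatter = griddata(samples, values, (gy, gx), method="cubic")
    nearest = griddata(samples, values, (gy, gx), method="nearest")
    scatter = np.where(np.isnan(scatter), nearest, scatter)  # fill hull edges
    return np.clip(proj - scatter, 0.0, None)  # scatter-corrected primary
```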
79

Dual energy computed tomography: Dose and image quality

Πετρόπουλος, Ανδρέας 26 July 2013 (has links)
Dual-energy computed tomography (DECT) is an evolving technique that enhances material differentiation by exploiting the spectral properties of materials. Spectral CT imaging requires the use of two different energy spectra and can distinguish elements that differ considerably in atomic number. Therefore iodine (Z=53), which is used as a contrast agent in CT scans, and bone and plaque calcifications, which contain calcium (Z=20), can be distinguished from the low-atomic-number elements of which the human body mostly consists, such as hydrogen (Z=1), oxygen (Z=8), carbon (Z=6) and nitrogen (Z=7). Currently there are three technical approaches to dual-energy computed tomography. The dual-layer detector system uses a single x-ray source and a detector with two scintillation layers stacked one on top of the other: the top layer absorbs most of the low-energy photons, while the bottom layer absorbs the remaining high-energy photons, so two energy datasets are acquired simultaneously. The second technology is fast kVp switching, which acquires two different energy spectra by alternating between low and high kVp on a view-by-view basis within a single rotation. The third technique, used in this study, is the dual-source CT system, which contains two x-ray tubes and two detectors; the two tubes can be operated independently at different kV, acquiring two energy datasets simultaneously. When the dual-source CT is used for a dual-energy scan, one tube is operated at 80 kV and the other at 140 kV.

In this study the dual-energy behaviour of soft-tissue-equivalent materials, bone, and iodine and calcium water solutions was examined through a series of experiments. Two acquisition protocols were used: a single-energy protocol at 80, 100, 120 and 140 kV, and a dual-energy protocol. The CT numbers of these materials, as well as image noise, contrast and contrast-to-noise ratio (CNR), were measured. These image quality features were also compared between the standard single-energy 120 kV image, which is the conventional CT scan, and the "virtual 120" kV image, a blended image reconstructed as a linear combination of the two dual-energy datasets. In addition, all possible linear combinations of the two dual-energy datasets were examined and compared in terms of image quality.

The results showed that only high-Zeff materials had enhanced contrast at 80 kV: bone, and the high iodine and calcium concentrations of 17, 25 and 35 mg/ml and 200, 250 and 300 mg/ml respectively. It is noteworthy that for small concentrations, such as 1.25, 2.5 and 3.5 mg/ml of iodine and 45 and 83 mg/ml of calcium, the contrast behaves like that of soft tissue. The contrast-to-noise ratio at 80 kV, however, is not as high as the contrast: image noise at 80 kV is so high that CNR values for all high-atomic-number materials are lower at 80 kV than at the other voltages, despite the very high contrast at 80 kV. Regarding the comparison of the single-energy 120 kV image and the "virtual 120" kV image, the experiments showed that the contrast values of bone and of the iodine and calcium concentrations were equal, but the CNR of the "virtual 120" kV image was considerably lower than that of the single-energy 120 kV image. Finally, the third experiment showed that contrast increases as the share of the 80 kV dataset in the blended image increases, while the CNR is highest over a range of combinations; specifically, the linear combinations with the highest CNR values were those with an 80 kV weighting factor between 0.4 and 0.7.
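Since the "virtual 120" kV image is a weighted linear blend of the 80 kV and 140 kV datasets, the weight scan of the third experiment is easy to sketch. A minimal sketch, assuming registered 80/140 kV images; the ROI/background masks and the weight grid are placeholders:

```python
import numpy as np

def blended_cnr(img80, img140, roi, bg, weights=np.linspace(0.0, 1.0, 21)):
    """Scan the 80 kV weighting factor w in I_blend = w*I80 + (1-w)*I140 and
    report the contrast-to-noise ratio of an ROI against a background region.
    roi and bg are boolean masks over registered 80/140 kV images."""
    results = []
    for w in weights:
        blend = w * img80 + (1.0 - w) * img140
        contrast = blend[roi].mean() - blend[bg].mean()
        results.append((w, contrast / blend[bg].std()))
    return results
```

In the experiments above, weights between 0.4 and 0.7 for the 80 kV data gave the highest CNR.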
80

Computational Tools and Methods for Objective Assessment of Image Quality in X-Ray CT and SPECT

Palit, Robin January 2012 (has links)
Computational tools for the objective assessment of image quality in tomography systems were developed for central processing units (CPUs) and graphics processing units (GPUs) in the image quality lab at the University of Arizona. Fast analytic x-ray projection code called IQCT was created to compute the mean projection image for cone-beam multi-slice helical computed tomography (CT) scanners; IQCT was optimized to take advantage of the massively parallel architecture of GPUs. CPU code for computing single photon emission computed tomography (SPECT) projection images was written, building on previous research in the image quality lab. IQCT and the SPECT modeling code were used to simulate data for multimodality SPECT/CT observer studies, whose purpose was to assess the benefit in image quality of using attenuation information from a CT measurement in myocardial SPECT imaging. The observer chosen for these studies was the scanning linear observer; the tasks were localization of a signal and estimation of the signal radius. For the localization study, the area under the localization receiver operating characteristic curve (A_LROC) was 0.89332 ± 0.00474 with attenuation information from the CT measurement and 0.89408 ± 0.00475 without it. For the estimation study, the area under the estimation receiver operating characteristic curve (A_EROC) was 0.55926 ± 0.00731 with CT attenuation information and 0.56167 ± 0.00731 without. Based on these results, it was concluded that the use of CT information did not improve the scanning linear observer's ability to perform the stated myocardial SPECT tasks. The risk to the patient of the CT measurement was quantified in terms of excess effective dose as 2.37 mSv for males and 3.38 mSv for females.

Another image quality tool generated within this body of work was a singular value decomposition (SVD) algorithm that reduces the dimension of the eigenvalue problem for tomography systems with rotational symmetry. Agreement between the results of this reduced-dimension SVD algorithm and those of a standard SVD algorithm is shown for a toy problem. The use of SVD for image quality metrics such as the measurement and null spaces is also presented.
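The symmetry-reduced SVD can be illustrated on a toy problem of the kind the abstract mentions. For a system matrix that is block-circulant under a K-fold rotational symmetry, a DFT across the block index block-diagonalizes it, so the full SVD reduces to K small per-block SVDs. The construction below is an assumed toy example, not the dissertation's code:

```python
import numpy as np

K, m, n = 8, 5, 4                          # 8-fold symmetry, 5x4 blocks
rng = np.random.default_rng(0)
blocks = rng.standard_normal((K, m, n))    # H_0 ... H_{K-1}

# Full system matrix, block-circulant in the rotation index:
# H[i, j] = blocks[(i - j) % K]
H = np.block([[blocks[(i - j) % K] for j in range(K)] for i in range(K)])
sv_direct = np.linalg.svd(H, compute_uv=False)

# A DFT across the block index block-diagonalizes H, so its singular
# values are the union of those of K small m x n blocks.
H_hat = np.fft.fft(blocks, axis=0)
sv_reduced = np.concatenate(
    [np.linalg.svd(H_hat[k], compute_uv=False) for k in range(K)])

assert np.allclose(np.sort(sv_direct), np.sort(sv_reduced))
```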
