About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Μελέτη του ποσοστού νεφοκάλυψης στην περιοχή του όρους Χελμός για τη βελτιστοποίηση της ποιότητας εικόνας του νέου ελληνικού τηλεσκοπίου Αρίσταρχος / Study of the cloud-cover fraction in the Mount Helmos area for optimizing the image quality of the new Greek telescope Aristarchos

Γαλανάκης, Νικόλαος 04 August 2009 (has links)
This thesis is divided into two parts. The main goal of the first part is to calculate the cloud-cover fraction in the area of the Aristarchos telescope in order to optimize image quality. The first chapter gives a brief description of the new Greek telescope Aristarchos and its instruments. The second chapter describes how astronomical observations are carried out and the difficulties they involve, with emphasis on the atmospheric disturbances caused by turbulence and by the differing densities of the various atmospheric layers; its final paragraphs give a brief overview of how photometric measurements are performed and why they are useful. The third chapter presents in detail the data obtained from the MeteoSAT 7 meteorological satellite: its tables list the cloud cover for the Aristarchos and Penteli sites for the years 2004 to 2007, per month and at six-hour intervals (00:00, 06:00, 12:00, and 18:00 UTC). The fourth chapter contains a detailed analysis of these data and the calculation of the cloud-cover fraction for the Aristarchos and Penteli sites per hour, per month, per year, and overall; the corresponding plots at the end of the chapter make the results of the study clearer.
The second part of the thesis is devoted to one of the main applications of the Aristarchos telescope: the study of active galactic nuclei (AGN). The properties of both normal and active galaxies are presented in detail. The theory of supermassive black holes in the nuclei of active galaxies, which most likely constitute the main mechanism producing the enormous amounts of energy these objects emit, is then discussed at length, together with the efforts under way to detect such black holes. Finally, galaxy clusters and galaxy collisions are discussed, since such collisions are probably responsible for feeding dormant supermassive black holes with fresh fuel and rendering them active again.
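As an illustration of the kind of tabulation described above, the following minimal Python sketch computes cloud-cover fractions per hour, per month, and overall from a table of satellite cloud flags; the file name and column layout are hypothetical, not taken from the thesis.

```python
import pandas as pd

# Hypothetical input: one row per MeteoSAT observation, with columns
# "timestamp" (UTC) and "cloudy" (1 if the site is cloud-covered, 0 otherwise).
obs = pd.read_csv("aristarchos_cloud_flags.csv", parse_dates=["timestamp"])

obs["hour"] = obs["timestamp"].dt.hour      # 0, 6, 12, 18
obs["month"] = obs["timestamp"].dt.month
obs["year"] = obs["timestamp"].dt.year

# Cloud-cover fraction = fraction of observations flagged as cloudy.
by_hour = obs.groupby("hour")["cloudy"].mean()
by_month = obs.groupby(["year", "month"])["cloudy"].mean()
overall = obs["cloudy"].mean()

print(by_hour, by_month.head(), f"overall fraction: {overall:.2%}", sep="\n")
```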
52

Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration

Lie Chin Cheong, Patrick January 1987 (has links)
No description available.
53

Wavelet and Sine Based Analysis of Print Quality Evaluations

Mahalingam, Vijay Venkatesh 01 January 2004 (has links)
Recent advances in imaging technology have resulted in a proliferation of images across different media. Before they reach the end user, these images undergo several transformations that may introduce defects or artifacts affecting the perceived image quality. In order to design and evaluate such imaging systems, perceived image quality must be measured. This work focuses on the analysis of print image defects and the characterization of printer artifacts such as banding and graininess using a human visual system (HVS) based framework. Specifically, the work addresses prediction of the visibility of print defects (banding and graininess) by representing the defects in terms of orthogonal wavelet and sinusoidal basis functions and combining the detection probabilities of the individual basis functions to predict the response of the HVS. The detection probabilities for the basis-function components and for the simulated print defects are obtained from separate subjective tests. The prediction performance of both the wavelet-based and the sine-based approach is compared with the subjective testing results. The wavelet-based prediction performs better than the sinusoidal-based approach and can be a useful technique for developing measures and methods for HVS-based print quality evaluation.
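A common way to combine per-basis-function detection probabilities into an overall visibility prediction is probability summation; the sketch below illustrates that idea under the assumption of independent detection of components (the thesis's actual combination rule may differ).

```python
import numpy as np

def combined_detection_probability(p_components: np.ndarray) -> float:
    """Probability that at least one basis-function component is detected,
    assuming statistically independent detection of the components."""
    p_components = np.clip(p_components, 0.0, 1.0)
    return 1.0 - np.prod(1.0 - p_components)

# Example: detection probabilities of a defect's wavelet components,
# as would be obtained from subjective tests (values here are illustrative only).
p = np.array([0.05, 0.20, 0.60, 0.10])
print(f"predicted defect visibility: {combined_detection_probability(p):.2f}")
```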
54

Micro-satellite Camera Design

Balli, Gulsum Basak 01 January 2003 (has links) (PDF)
The aim of this thesis is the design of a micro-satellite camera system and its focal-plane simulations. Typical micro-satellite orbit heights range between 600 and 850 km, and a multi-payload satellite naturally imposes volume and power restrictions on each payload. In this work, an orbit height of 600 km and a volume of 20 × 20 × 30 cm are assumed, since minimizing the payload dimensions increases the probability of launch. The pixel size and the dimensions of an imaging detector such as a charge-coupled device (CCD) are defined by the useful image area with acceptable aberration limits on the focal plane. In order to predict the minimum pixel size to be used at the focal plane, modulation transfer function (MTF), point spread function (PSF), image distortion, and aberration simulations have been carried out, and detector parameters for the designed camera are presented.
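For context, the relationship between orbit height, focal length, pixel pitch, and ground resolution that drives such a pixel-size trade-off can be sketched as below; the numbers are illustrative assumptions, not the thesis's actual design parameters.

```python
# Simple geometric and diffraction estimates for a nadir-pointing camera.
# All numbers are illustrative assumptions, not the thesis's design values.

orbit_height_m = 600e3      # 600 km orbit
focal_length_m = 0.30       # fits within a 20 x 20 x 30 cm payload envelope
pixel_pitch_m = 10e-6       # candidate CCD pixel size (10 um)
aperture_m = 0.09
wavelength_m = 550e-9       # visible band

# Ground sample distance: ground footprint of one pixel.
gsd_m = orbit_height_m * pixel_pitch_m / focal_length_m

# Diffraction-limited spot (Airy disk) diameter at the focal plane;
# pixels much smaller than this gain no extra resolution.
airy_diameter_m = 2.44 * wavelength_m * focal_length_m / aperture_m

print(f"GSD: {gsd_m:.1f} m/pixel")
print(f"Airy disk diameter: {airy_diameter_m * 1e6:.1f} um")
```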
55

On optimality and efficiency of parallel magnetic resonance imaging reconstruction: challenges and solutions

Nana, Roger 12 November 2008 (has links)
Imaging speed is an important issue in magnetic resonance imaging (MRI), as subject motion during image acquisition is liable to produce artifacts in the image. However, the speed at which data can be collected in conventional MRI is fundamentally limited by physical and physiological constraints. Parallel MRI is a technique that utilizes multiple receiver coils to increase the imaging speed beyond previous limits by reducing the amount of acquired data without degrading the image quality. In order to remove the image aliasing due to k-space undersampling, parallel MRI reconstructions invert the encoding matrix that describes the net effect of the magnetic field gradient encoding and the coil sensitivity profiles. The accuracy, stability, and efficiency of the matrix inversion strategy largely dictate the quality of the reconstructed image. This thesis addresses five specific issues pertaining to this linear inverse problem and offers practical solutions to improve clinical and research applications. First, for reconstruction algorithms adopting a k-space interpolation approach to the linear inverse problem, two methods are introduced that automatically select the optimal subset of k-space samples participating in the synthesis of a missing datum, guaranteeing an optimal compromise between accuracy and stability, i.e., the best balance between artifacts and signal-to-noise ratio (SNR). The former is based on a cross-validation re-sampling technique, while the latter utilizes a newly introduced data consistency error (DCE) metric that exploits the shift-invariance property of the reconstruction kernel to provide a goodness measure of k-space interpolation in parallel MRI. Additionally, the utility of DCE as a metric for characterizing and comparing reconstruction methods is demonstrated. Second, a DCE-based strategy is introduced to improve reconstruction efficiency in real-time parallel dynamic MRI. Third, an efficient and reliable reconstruction method that operates on gridded k-space for parallel MRI using non-Cartesian trajectories is introduced, with a significant computational gain for applications involving repetitive measurements. Finally, a pulse sequence that combines parallel MRI with a multi-echo strategy is introduced for improving SNR and reducing the geometric distortion in diffusion tensor imaging. In addition, the sequence inherently provides a T2 map, complementary information that can be useful for some applications.
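The k-space interpolation described above (GRAPPA-like) amounts to a linear least-squares fit of a kernel that synthesizes missing k-space samples from acquired neighbors across coils; a minimal sketch of that calibration-and-synthesis step follows, with array shapes and the regularization choice as assumptions for illustration only.

```python
import numpy as np

def fit_interpolation_kernel(source: np.ndarray, target: np.ndarray,
                             lam: float = 1e-6) -> np.ndarray:
    """Least-squares fit of a linear kernel W such that target ~= source @ W.

    source: (n_fits, n_source_points) acquired neighbor samples (all coils, flattened)
    target: (n_fits, n_coils) samples to be synthesized, taken from a fully
            sampled calibration region.
    """
    # Tikhonov-regularized normal equations for numerical stability.
    A = source.conj().T @ source + lam * np.eye(source.shape[1])
    b = source.conj().T @ target
    return np.linalg.solve(A, b)

def synthesize_missing(source: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Apply the fitted kernel to undersampled data to fill missing k-space lines."""
    return source @ W

# Illustrative shapes: 500 calibration fits, 4 coils x 6 neighbors = 24 sources.
rng = np.random.default_rng(0)
src = rng.standard_normal((500, 24)) + 1j * rng.standard_normal((500, 24))
tgt = src @ (rng.standard_normal((24, 4)) + 1j * rng.standard_normal((24, 4)))
W = fit_interpolation_kernel(src, tgt)
print(np.allclose(synthesize_missing(src, W), tgt, atol=1e-6))
```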
56

Studies on the salient properties of digital imagery that impact on human target acquisition and the implications for image measures.

Ewing, Gary John January 1999 (has links)
Electronically displayed images are becoming increasingly important as an interface between people and information systems, and lengthy periods of intense observation are no longer unusual. There is a growing awareness that specific demands should be made on displayed images in order to achieve an optimum match with the perceptual properties of the human visual system. These demands may vary greatly, depending on the task for which the displayed image is to be used and on the ambient conditions. Optimal image specifications are clearly not the same for a home TV, a radar signal monitor, or an infrared targeting image display. There is, therefore, a growing need for objective measures of image quality, where "image quality" is used in a very broad sense, is defined in the thesis, and includes any impact of image properties on human performance in relation to specified visual tasks. The aim of this thesis is to consolidate and comment on the image measure literature, and to find through experiment the salient properties of electronically displayed real-world complex imagery that impact on human performance. These experiments were carried out for well-specified visual tasks of real relevance, and the appropriate application of image measures to this imagery, to predict human performance, was considered. An introduction to certain aspects of image quality measures is given, and clutter metrics are integrated into this concept. A very brief and basic introduction to the human visual system (HVS) is given, with some basic models. The literature on image measures is analysed, resulting in a classification of image measures according to the features they attempt to quantify. A series of experiments was performed to evaluate the effects of image properties on human performance, using appropriate measures of performance. The concept of image similarity was explored by objectively measuring the subjective perception of imagery of the same scene obtained through different sensors and subjected to different luminance transformations. Controlled degradations were introduced using image compression; both still and video compression were used to investigate the spatial and temporal aspects of HVS processing, and the effects of various compression schemes on human target acquisition performance were quantified. A study was carried out to determine the "local" extent to which the clutter around a target affects its detectability. It was found that the accepted wisdom of setting the local domain (the support of the metric) to twice the expected target size was incorrect: the local extent of clutter was found to be much greater, which has implications for the application of clutter metrics. An image quality metric called the gradient energy measure (GEM), for quantifying the effect of filtering on nuclear medicine derived images, was developed and evaluated; it proved to be a reliable measure of image smoothing and noise level, and in preliminary studies it agreed with human perception. The final study in this thesis determined the performance of human image analysts, in terms of their receiver operating characteristic, when using synthetic aperture radar (SAR) derived images in a surveillance context. In particular, the effects of target contrast and background clutter on analyst target detection performance were quantified.
In the final chapter, suggestions to extend the work of this thesis are made, and in this context a system to predict human visual performance from input imagery is proposed. This system intelligently uses image metrics based on the particular visual task, human expectations, and human visual system performance parameters. / Thesis (Ph.D.)--Medical School; School of Computer Science, 1999.
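The gradient energy measure (GEM) mentioned above is, in essence, a summary statistic of image gradient magnitudes; below is a minimal sketch of one plausible formulation (the thesis's exact definition and normalization may differ).

```python
import numpy as np

def gradient_energy_measure(image: np.ndarray) -> float:
    """Mean squared gradient magnitude of a 2-D image.

    Higher values indicate more high-frequency content (sharpness/noise);
    smoothing filters lower the value. This is one plausible formulation of
    a gradient-energy-style metric, not necessarily the thesis's exact GEM.
    """
    img = image.astype(float)
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

# Example: a smoothed image should score lower than the original.
rng = np.random.default_rng(1)
noisy = rng.random((128, 128))
smoothed = (noisy + np.roll(noisy, 1, axis=0) + np.roll(noisy, 1, axis=1)) / 3.0
print(gradient_energy_measure(noisy) > gradient_energy_measure(smoothed))  # True
```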
57

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

January 2017 (has links)
abstract: Image processing has changed the way we store, view, and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not highly detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image. However, IQA algorithms often perform poorly because of the complex statistical computations involved. General-purpose graphics processing unit (GPGPU) programming is one of the solutions proposed to optimize the performance of these algorithms. This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D discrete wavelet transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The implementation is tested on four different image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup is observed to scale with image size. The results show that the implementation is fast enough to apply VSNR to high-definition video at a frame rate of 60 fps. This work presents the optimizations made through the use of the GPU's constant memory and the reuse of allocated memory on the GPU, and it shows the performance improvement obtained with profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production alongside existing applications. / Dissertation/Thesis / Masters Thesis Computer Science 2017
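For reference, the decomposition stage that VSNR relies on can be sketched with PyWavelets, whose 'bior4.4' wavelet is commonly identified with the CDF 9/7 biorthogonal filter pair; this reproduces only the multi-level DWT, not the full metric or the CUDA implementation discussed in the thesis.

```python
import numpy as np
import pywt  # PyWavelets

def multilevel_dwt(image: np.ndarray, levels: int = 5):
    """M-level 2-D DWT with biorthogonal 9/7-style filters.

    PyWavelets' 'bior4.4' wavelet is commonly identified with the CDF 9/7
    filter pair; this is only the decomposition stage of a VSNR-style metric.
    """
    return pywt.wavedec2(image.astype(float), wavelet="bior4.4", level=levels)

# Example on a synthetic 1600 x 1280 "frame".
frame = np.random.default_rng(2).random((1280, 1600))
coeffs = multilevel_dwt(frame)
approx, details = coeffs[0], coeffs[1:]
print(approx.shape, len(details))  # coarse approximation + 5 detail levels
```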
58

Radiation Dose Optimization For Critical Organs

January 2013 (has links)
abstract: Ionizing radiation used in patient diagnosis or therapy has short-term and long-term negative effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information about the organ dose received is available to the patient or to Quality Assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we develop a mathematical model that determines a set of geometry settings for the equipment and an energy level to use during a patient exam. The goal is to minimize the absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed-integer program. We perform polyhedral analysis and derive several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize the angle and the table location while minimizing dose in the critical organs subject to the image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach to organ motion within a radiology procedure: we minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose for the robust approach, which ensures feasibility across scenarios. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
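To make the flavor of such an optimization model concrete, here is a heavily simplified, hypothetical sketch of the angle/exposure selection problem as a mixed-integer program; the candidate angles, dose and quality coefficients, and the PuLP formulation are illustrative assumptions, not the thesis's actual model.

```python
import pulp

# Hypothetical candidate C-arm angles with per-unit-exposure dose to a
# critical organ and a relative image-quality score (illustrative numbers only).
angles = ["A0", "A30", "A60", "A90"]
organ_dose = {"A0": 1.0, "A30": 0.4, "A60": 0.7, "A90": 1.2}   # mGy per unit exposure
quality = {"A0": 0.9, "A30": 0.6, "A60": 0.8, "A90": 0.95}
min_quality = 0.75

prob = pulp.LpProblem("dose_minimization", pulp.LpMinimize)
select = pulp.LpVariable.dicts("select", angles, cat="Binary")
exposure = pulp.LpVariable("exposure", lowBound=1.0, upBound=3.0)  # tube-output level

# Choose exactly one geometry setting.
prob += pulp.lpSum(select[a] for a in angles) == 1
# Image quality of the chosen angle must meet the diagnostic threshold.
prob += pulp.lpSum(quality[a] * select[a] for a in angles) >= min_quality
# Objective: a linearized surrogate of organ dose (a real model would couple
# exposure and angle; kept linear here purely for illustration).
prob += pulp.lpSum(organ_dose[a] * select[a] for a in angles) + 0.1 * exposure

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [a for a in angles if select[a].value() > 0.5]
print(chosen, pulp.value(prob.objective))
```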
59

Avaliação da qualidade da imagem e taxa de exposição na cardiologia intervencionista / Evaluation of Image Quality and Exposure Rate in Interventional Cardiology.

Roberto Contreras Pitorri 14 October 2013 (has links)
Fluoroscopy is an X-ray imaging technique in which a dynamic image detector allows examinations of organs to be followed in real time. The detectors currently in use are image intensifiers (II) and flat panels (FP): the former (a vacuum-tube device) mainly amplifies the brightness of the image, while the latter (a solid-state device), more recently adopted in fluoroscopy equipment, improves image quality (contrast and detail) by reducing noise and artifacts. General fluoroscopic examinations cover the head, thorax, and abdomen; they used to be performed on a single type of equipment, but with the evolution of the technology, dedicated machines, developed and assembled by several manufacturers across the Americas, Europe, and Asia, are now used for these examinations.
The aim of this work is to analyze two groups of fluoroscopy units (II and FP detectors) dedicated exclusively to cardiac fluoroscopy, from different institutions and manufacturers, in order to assess the parameters of contrast, detail, and exposure rate at the entrance of the detector within a quality control programme of interest to the maintenance service as well as to the medical physicist. With these results, each group (through its average) can be compared with other groups, with a specific unit, and with international reference values. For this purpose, a PMMA phantom (simulator object, OS) was developed together with a protocol derived from the literature, comprising: measurements of the exposure rate at the entrance of the image detector (TEEDI); tests for selecting the equipment to be included in the samples; and contrast and detail tests (using the OS), whose product defines a figure of merit (FOM). With all the results obtained, the distributions of the two groups were analyzed through their averages and compared both with each other and with reference values from the international literature. The work confirmed that the OS, the protocol, and the methodology applied were adequate for the quality control of the selected equipment samples, classifying the units according to their potential for optimization of FOM and TEEDI. The average FOMs for II and FP differ from the reference FOM by 35.5% and 35.0%, respectively, and the average TEEDIs for II and FP differ from the reference TEEDI by 13.8% and 24.9%, respectively. The latter should be adjusted by the maintenance service (mainly for the FP units) in order to bring them closer to the references used in obtaining the distributions.
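As a small worked illustration of the comparison reported above, the sketch below computes a contrast-detail figure of merit and its percentage deviation from a reference value; the FOM is taken as the product of the contrast and detail scores, as described in the abstract, but the numbers are purely illustrative.

```python
def figure_of_merit(contrast_score: float, detail_score: float) -> float:
    """FOM defined as the product of the contrast and detail test scores."""
    return contrast_score * detail_score

def percent_deviation(value: float, reference: float) -> float:
    """Percentage distance of a measured average from its reference value."""
    return abs(value - reference) / reference * 100.0

# Illustrative numbers only (not the thesis's measurements).
fom_reference = 100.0
fom_group_mean = figure_of_merit(9.0, 8.0)   # = 72.0
print(f"group FOM deviates {percent_deviation(fom_group_mean, fom_reference):.1f}% from reference")
```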
60

Avaliação da correção de atenuação e espalhamento em imagens SPECT em protocolo cerebral / Evaluation of Attenuation and Scattering Correction in SPECT images of a Cerebral Protocol

Thays Berretta Käsemodel 22 September 2014 (has links)
Single photon emission computed tomography (SPECT) is a diagnostic modality in nuclear medicine in which the radiation emitted by a radiopharmaceutical previously administered to the patient is detected. Since the emitted photons interact with the patient's body, attenuation and scatter corrections are necessary in order to better represent the distribution of the radiopharmaceutical and thus produce more accurate images. The aim of this study is to evaluate the parameters adopted as standard for tomographic image reconstruction, and the attenuation and scatter corrections of SPECT images, at the Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, through qualitative and quantitative analysis of images reconstructed from tomographic acquisitions. Under a cerebral SPECT-CT protocol modified to use two acquisition windows, SPECT and SPECT-CT images were acquired (BrightView XCT, Philips) using a Jaszczak phantom and reconstructed with the FBP, MLEM, and OSEM methods. The results show that FBP yields images of low accuracy because of the low SNR. The evaluation suggests adopting the iterative MLEM and OSEM methods with attenuation correction as the standard reconstruction method for cerebral perfusion images. Based on the Jaszczak phantom images and on the contrast analysis between a cold sphere and the background, we propose observational analysis and evaluation of clinical images reconstructed with OSEM using 3 iterations, 16 subsets, and a Butterworth filter with cutoff frequency 0.34 and order 1 as the new standard reconstruction parameters.
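The MLEM update underlying such iterative reconstructions has a compact form; the sketch below shows one plausible implementation of the basic update for a generic system matrix (no attenuation or scatter modeling, no acceleration), purely to illustrate the algorithm the abstract refers to.

```python
import numpy as np

def mlem(system_matrix: np.ndarray, projections: np.ndarray,
         n_iterations: int = 24, eps: float = 1e-12) -> np.ndarray:
    """Basic MLEM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)).

    system_matrix: (n_bins, n_voxels) forward projector A
    projections:   (n_bins,) measured counts y
    OSEM would apply the same update over ordered subsets of the rows
    (e.g., 16 subsets and 3 iterations, as in the protocol above).
    """
    A, y = system_matrix, projections
    sensitivity = A.sum(axis=0) + eps        # A^T 1
    x = np.ones(A.shape[1])                  # uniform initial estimate
    for _ in range(n_iterations):
        expected = A @ x + eps               # forward projection
        x *= (A.T @ (y / expected)) / sensitivity
    return x

# Tiny synthetic example.
rng = np.random.default_rng(3)
A = rng.random((40, 16))
x_true = rng.random(16)
y = A @ x_true
x_hat = mlem(A, y, n_iterations=200)
print(np.round(np.corrcoef(x_hat, x_true)[0, 1], 3))
```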
