131 |
GPGPU based implementation of BLIINDS-II NR-IQA, January 2016 (has links)
abstract: The technological advances of the past few decades have made possible the creation and consumption of digital visual content at an explosive rate. Consequently, there is a need for efficient quality monitoring systems to ensure minimal degradation of images and videos during processing operations such as compression, transmission, and storage. Objective Image Quality Assessment (IQA) algorithms have been developed that predict quality scores matching human subjective quality assessment well. However, much research remains to be done before IQA algorithms can be deployed in real-world systems. Long runtimes for a single image frame are a major hurdle. Graphics Processing Units (GPUs), equipped with a massive number of computational cores, provide an opportunity to accelerate IQA algorithms by performing computations in parallel. Indeed, General Purpose Graphics Processing Unit (GPGPU) techniques have been applied to a few Full Reference IQA algorithms. We present a GPGPU implementation of the Blind Image Integrity Notator using DCT Statistics (BLIINDS-II), which falls under the No Reference IQA paradigm. We achieve a speedup of over 30x relative to the previous CPU version of this algorithm. We test our implementation on various distorted images from the CSIQ database and present the observed performance trends. We achieve a very consistent runtime of around 9 milliseconds per distorted image, which makes possible the processing of over 100 images per second (100 fps). / Dissertation/Thesis / Masters Thesis Computer Science 2016
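The throughput claim follows directly from the reported per-image latency; a quick sanity check of the arithmetic:

```python
# Throughput implied by the reported ~9 ms GPU runtime per distorted image.
latency_ms = 9.0
images_per_second = 1000.0 / latency_ms
print(images_per_second > 100)  # over 100 images per second, as the abstract claims
```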
|
132 |
Hardware Acceleration of Most Apparent Distortion Image Quality Assessment Algorithm on FPGA Using OpenCL, January 2017 (has links)
abstract: The information era has brought about many technological advancements in the past few decades, which have led to an exponential increase in the creation of digital images and videos. Virtually all digital images pass through some image processing algorithm for various reasons, such as compression, transmission, or storage. Data loss during this processing leaves us with a degraded image; hence, to ensure minimal degradation of images, quality assessment has become a necessity. Image Quality Assessment (IQA) has been researched and developed over the last several decades to predict quality scores in a manner that agrees with human judgments of quality. Modern IQA algorithms are quite effective in prediction accuracy, but their development has not focused on improving computational performance. Existing serial implementations require relatively large run-times, on the order of seconds for a single frame. Hardware acceleration using field-programmable gate arrays (FPGAs) provides a reconfigurable computing fabric that can be tailored for a broad range of applications. Traditionally, programming FPGAs has required expertise in hardware description languages (HDLs) or high-level synthesis (HLS) tools. OpenCL is an open standard for cross-platform, parallel programming of heterogeneous systems; together with the Altera OpenCL SDK, it enables developers to exploit an FPGA's potential without extensive hardware knowledge. Hence, this thesis focuses on accelerating the computationally intensive part of the most apparent distortion (MAD) algorithm on an FPGA using OpenCL. The results are compared with a CPU implementation to evaluate performance and efficiency gains. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2017
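OpenCL expresses image operations like those in MAD as data-parallel kernels, where each work item computes one output pixel independently. A minimal Python emulation of that programming model (illustrative only, not the thesis code; the local-variance statistic here is a stand-in for MAD's error-visibility computations):

```python
# Illustrative emulation of OpenCL's data-parallel model: each "work item"
# computes one output pixel of a local-statistics map (here, local variance).
def local_variance_kernel(img, x, y, radius=1):
    # Gather the neighbourhood, clamping at borders (like CLK_ADDRESS_CLAMP_TO_EDGE).
    h, w = len(img), len(img[0])
    vals = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            vals.append(img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def run_kernel(img):
    # On an FPGA, the OpenCL compiler pipelines these independent work items.
    h, w = len(img), len(img[0])
    return [[local_variance_kernel(img, x, y) for x in range(w)] for y in range(h)]

flat = [[5] * 4 for _ in range(4)]
print(run_kernel(flat)[1][1])  # a flat image has zero local variance: 0.0
```

Because every work item is independent, the hardware is free to execute them in a deep pipeline, which is where the FPGA speedup comes from.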
|
133 |
Cone Beam CT in dentistry: Motion artifacts and their reduction. Hanzelka, Tomáš, January 2013 (has links)
Cone Beam Computed Tomography (CBCT) allows effective 3D imaging in dentistry. A CBCT scanner consists of a planar detector and an x-ray source that rotate once around the patient's head. The x-ray beam is cone-shaped and is directed through the whole volume of interest. All the data needed are obtained during a single rotation of the source and detector. This rotation takes from several to several tens of seconds, during which the CBCT captures several hundred 2D images. These represent different points of view on the region of interest and are later reconstructed to form a 3D data set. The biggest advantage of CBCT is that it can produce a 3D image at radiation doses similar to those of conventional diagnostic methods used in dentistry (Pauwels et al., 2010). In the experimental part of our work, we address one of the biggest weaknesses of CBCT: patient movement during scanning, which has a major impact on image quality and is currently the main limiting factor in the further development of this technology. In the first part of our experiment, we recorded movements of patients and of the CBCT scanner using a high-speed camera and subsequently analyzed the data in MATLAB. A significant level of patient motion, as well as motion of the CBCT scanner, was demonstrated. Motion was highest at the...
|
134 |
Texture Structure Analysis, January 2014 (has links)
abstract: Texture analysis plays an important role in applications such as automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures, which can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as an important pre-attentive cue in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the ability of the most popular visual attention models to predict visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps on images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and combines two texture regularity scores, namely a textural similarity score and a spatial distribution score.
In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and to outperform some of the popular texture regularity metrics in predicting perceived regularity. The potential of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through a synthesized-textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed, based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures; perceived granularity is quantified through a new granularity metric proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.
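The intuition that regular textures carry a measurable statistical signature can be illustrated with a toy example (an assumed, simplified stand-in for the thesis's saliency-based scores, not the proposed metric itself): a periodic signal produces a strong autocorrelation peak at its period, while an irregular one does not.

```python
import random

def autocorr(sig, lag):
    # Normalised circular autocorrelation at a given lag.
    n = len(sig)
    mean = sum(sig) / n
    num = sum((sig[i] - mean) * (sig[(i + lag) % n] - mean) for i in range(n))
    den = sum((s - mean) ** 2 for s in sig)
    return num / den

# A perfectly regular (period-3) texture row vs. an irregular one.
regular = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
random.seed(0)
irregular = [random.random() for _ in range(12)]

# Toy regularity proxy: strength of the strongest non-zero-lag correlation peak.
reg_score = max(autocorr(regular, k) for k in range(1, 6))
irr_score = max(autocorr(irregular, k) for k in range(1, 6))
print(reg_score > irr_score)  # periodic structure yields the stronger peak
```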
/ Dissertation/Thesis / Ph.D. Electrical Engineering 2014
|
135 |
Video de-interlacing with support from a feathering effect detector and an artifact concentration index. André Luis Martins, 27 February 2018 (has links)
This work presents a new solution for converting interlaced video fields into progressive frames, a process known as video deinterlacing. Current state-of-the-art deinterlacing algorithms, in an attempt to avoid generating "feathering effect" video artifacts, tend to introduce blurring and degrade image quality. The objective is to improve the quality of the frames produced by the deinterlacing process by combining two existing processes, one intra-field and the other inter-field. The proposed strategy is based on identifying the artifacts generated by an inter-field deinterlacing process, supported by a "feathering effect" artifact detector and by analysis of the detector's output using an artifact agglomeration index, named the "Spot Index" in this work. The regions affected by the feathering effect are identified and replaced by the equivalent regions extracted from a frame produced by an intra-field process known as "Edge-based Line Averaging" (ELA). Tests demonstrated that the proposed strategy produces deinterlaced frames with higher visual quality than that obtained with a single type of method applied globally, because it can explore, extract and combine the strengths of each method. A statistical evaluation of hypotheses showed that the proposed strategy brings considerable advantages over globally applied deinterlacing techniques.
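The intra-field interpolator mentioned above, Edge-based Line Averaging (ELA), fills each missing line by averaging along the direction of least change between the lines above and below. A minimal sketch (simplified to three candidate directions; not the thesis implementation):

```python
# ELA sketch: for each missing pixel, pick the direction (left-diagonal,
# vertical, right-diagonal) whose neighbours in the lines above and below
# differ least, and average along it. Plain vertical averaging would blur
# diagonal edges; ELA follows them instead.
def ela_interpolate(above, below):
    w = len(above)
    out = []
    for x in range(w):
        best = None
        for d in (-1, 0, 1):  # candidate edge directions
            xa, xb = x + d, x - d
            if 0 <= xa < w and 0 <= xb < w:
                diff = abs(above[xa] - below[xb])
                if best is None or diff < best[0]:
                    best = (diff, (above[xa] + below[xb]) / 2)
        out.append(best[1])
    return out

# A diagonal edge between the two known lines: ELA keeps it sharp, where
# vertical averaging would produce a 50-valued blur at x = 1.
above = [0, 0, 0, 100]
below = [0, 100, 100, 100]
print(ela_interpolate(above, below))  # [0.0, 0.0, 100.0, 100.0]
```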
|
136 |
A Review of Perceptual Image Quality. Petersson, Jonas, January 2005 (has links)
What is meant by print quality, and what makes people perceive the quality of an image in a certain way? An inquiry was made into the parameters that most strongly affect the perception of digitally printed images. A subjective test and a set of measurements form the basis of the thesis. The goal was to find a tool for predicting perceived image quality by investigating the connections between the subjective test and the measurements. Suitable images were chosen, with a variety of motifs. A test panel consisting of people accustomed to judging image quality answered questions about their perception of the quality. Measurements were made on a special test form to characterize the six different printers used in the investigation. One discovery was made when two images with the same colorful motif were compared: the first image received a much higher grade for general quality than the second, even though the second was printed with a printer that had a larger color gamut. The reason for this is that the first image contains more saturated colors, while the second has more detail; the human eye perceives the more saturated image as better than the one with more detail. Another discovery was the correlation between the perceived general quality of a colored image and the perceived color gamut. One conclusion was that a great difference between two calculated color gamuts resulted in a large perceived difference between them. For an image with very few colors and many glossy surfaces, it was found that print mottle and sharpness are closely connected to the general quality.
|
137 |
Quality-driven control of a robotized ultrasound probe. Chatelain, Pierre, 12 December 2016 (has links)
The robotic guidance of an ultrasound probe has been extensively studied as a way to assist sonographers in performing an exam. In particular, ultrasound-based visual servoing methods have been developed to fulfill various tasks, such as compensating for physiological motion, maintaining the visibility of an anatomic target during teleoperation, or tracking a surgical instrument. However, due to the specific nature of ultrasound images, guaranteeing good image quality during the procedure remains an unaddressed challenge. This thesis deals with the control of ultrasound image quality for a robot-held ultrasound probe. The quality of the acoustic signal within the image is represented by a confidence map, which is used to design a servo control law for optimizing the placement of the ultrasound probe. A hybrid control scheme is also proposed to optimize the acoustic window for a specific anatomical target that is tracked in the ultrasound images. The method is illustrated in a teleoperation scenario, where control is shared between the automatic controller and a human operator.
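A confidence-driven servo law of the kind described can be sketched as a proportional controller that steers the probe so the confidence-weighted barycenter of the scan lines sits at the image centre. This is an assumed, simplified form for illustration, not the thesis controller:

```python
# Hedged sketch: per-scan-line confidence values (e.g. column means of a
# confidence map) drive an in-plane rotation command.
def confidence_barycenter(confidence):
    # confidence[i] = mean confidence of scan line i
    total = sum(confidence)
    return sum(i * c for i, c in enumerate(confidence)) / total

def control_step(confidence, gain=0.5):
    centre = (len(confidence) - 1) / 2
    error = confidence_barycenter(confidence) - centre
    return -gain * error  # commanded rotation (arbitrary units, assumed sign convention)

# Uniform confidence: the probe is well placed, so no correction is commanded.
print(control_step([0.5] * 5))  # 0.0 (well, -0.0)
# Higher confidence on one side: a corrective rotation is commanded.
cmd = control_step([0.1, 0.1, 0.1, 0.9, 0.9])
print(cmd != 0)
```

The gain and sign convention are placeholders; a real controller would map this error into the probe's velocity twist.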
|
138 |
Aspects of dental cone-beam computed tomography in children and young people. Hidalgo Rivas, Jose Alejandro, January 2014 (has links)
Cone-beam computed tomography (CBCT) has become increasingly popular in dentistry. It is usually associated with radiation doses that are lower than those of conventional computed tomography (CT) but greater than those of dental radiography. Because exposure to ionising radiation carries risks, the radiation protection principles of justification and optimisation should be applied. These are especially important in children and young people due to their greater risk of developing stochastic effects. Justification requires balancing the radiation risk against the potential benefits, and the latter depend on diagnostic efficacy. There has been a proliferation of articles published on dental CBCT, and this literature needs to be reviewed systematically so that diagnostic efficacy can be judged. In terms of optimisation, radiation dose reduction can be achieved in various ways, but the use of barrier materials to protect younger patients in CBCT has not been adequately tested. Reducing exposure parameters in CBCT will lower doses, but at the expense of a loss of image quality. While some efforts have been made to relate radiation exposure and image quality in CBCT, there is a need to develop low-dose CBCT protocols specifically for children and young people. The first aim of this thesis was to survey current uses of CBCT in children and young people in three United Kingdom dental hospitals. The second aim was to determine the efficacy of thyroid shielding in a child phantom, testing several different designs, materials and thicknesses of thyroid shields. The third aim was to evaluate the evidence on the diagnostic efficacy of dental CBCT for root fractures in permanent, non-endodontically treated, anterior teeth by conducting a systematic review.
The fourth aim was to evaluate objective and subjective image quality in a laboratory study to determine a low-dose CBCT protocol that maintains adequate diagnostic image quality for a clinical indication in children. The final aim was to evaluate this low-dose protocol in terms of image quality in real clinical situations. High adherence to the European Guidelines No. 172 on radiation protection in dental CBCT was found amongst the surveyed hospitals. Thyroid shielding was found to be effective for dose reduction when performing a large field-of-view CBCT scan in a child phantom, but design influenced efficacy. The systematic review showed that research articles investigating CBCT diagnostic accuracy for vertical and horizontal root fractures had methodological deficiencies, while only one study was identified that addressed higher levels of diagnostic efficacy. A low-dose imaging protocol was identified in a laboratory study and shown to be an effective tool for dose reduction, providing adequate diagnostic image quality while reducing radiation doses considerably for clinical indications in the anterior maxilla in children and young people.
|
139 |
Effect of exposure charts on reject rate of extremity radiographs. Kalondo, Luzanne, January 2010 (has links)
This study discusses reject film analyses (RFAs) carried out before and after the implementation of a quality improvement intervention. The RFAs were undertaken to investigate the effect of the introduction and use of exposure charts (ECs) on department and student reject rates of extremity radiographs. Methods: A quantitative comparative pre- and post-treatment research design was used. Data were collected from the x-ray departments of two training hospitals in Windhoek, Namibia over a five-month period. A retrospective RFA was conducted to determine the department and student reject rates for both departments before the intervention, with emphasis on exposure-related reject films. ECs were compiled and introduced at Katutura State Hospital (venue B) by the researcher, and the students were instructed to use these charts. At Windhoek Central Hospital (venue A) no ECs were used. A prospective RFA was then conducted to establish department and student reject rates at both hospitals after the intervention at venue B. Results: During the retrospective phase the department reject rate for venue A was 21 percent, while the student reject rate was 23 percent; at venue B the figures were 24 percent and 26 percent respectively. Students at venue A produced rejected radiographs due to overexposure (49 percent) and underexposure (23 percent), whilst 37 percent was recorded for both causes at venue B. At venue A, 35 percent of films were rejected due to incorrect mAs selection; at venue B the figure was 42 percent. Non-diagnostic radiographs due to inaccurate kV selection comprised 62 percent for venue A and 59 percent for venue B. During the prospective phase the department reject rate for venue A was 20 percent and that of the students 19 percent; for venue B the figures were 12 percent and 11 percent respectively. At venue A, radiographs rejected due to over- and underexposure were 43 percent and 33 percent respectively, while at venue B they were 33 percent and 34 percent. Incorrect mAs selection caused 33 percent of discarded films at venue A and 38 percent at venue B. The figures for inaccurate kV selection were 68 percent and 62 percent for venues A and B. Conclusions: The introduction and use of ECs lowered the student reject rate at venue B in the prospective phase.
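The reject rate used throughout a reject film analysis is simply rejected films as a percentage of all films produced. The counts below are illustrative only (the thesis reports rates, not raw counts), chosen to reproduce venue B's drop from 24 to 12 percent:

```python
# Reject rate as used in a reject film analysis (RFA).
def reject_rate(rejected, total):
    return 100.0 * rejected / total

before = reject_rate(120, 500)  # hypothetical counts giving 24.0 percent
after = reject_rate(60, 500)    # hypothetical counts giving 12.0 percent
print(before, after)  # 24.0 12.0
```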
|
140 |
Evaluation of Image Quality of Pituitary Dynamic Contrast-Enhanced MRI Using Time-Resolved Angiography With Interleaved Stochastic Trajectories (TWIST) and Iterative Reconstruction TWIST (IT-TWIST). Yokota, Yusuke, 23 September 2020 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Dissertation Kou No. 22743 / Med. No. 4661 / Call number 新制||医||1046 (University Library) / Graduate School of Medicine, Kyoto University / Examiners: Prof. 花川 隆, Prof. 渡邉 大, Prof. 黒田 知宏 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
|