CONTENT UNDERSTANDING FOR IMAGING SYSTEMS: PAGE CLASSIFICATION, FADING DETECTION, EMOTION RECOGNITION, AND SALIENCY BASED IMAGE QUALITY ASSESSMENT AND CROPPING

Shaoyuan Xu (9116033), 12 October 2021
This thesis consists of four sections, each corresponding to a research project.

The first section concerns Page Classification. We extend our previous approach, which classified three classes of pages (Text, Picture, and Mixed), to five classes: Text, Picture, Mixed, Receipt, and Highlight. We first design new features to characterize the two new classes and then use a DAG-SVM to classify the five classes of images. The results show that our algorithm performs well across all five page types.

The second section concerns Fading Detection. We develop an algorithm that automatically detects fading in both text and non-text regions. For text regions, we first perform global alignment and then local alignment. We then create a 3D color node system, assign each connected component to a color node, and compute the color difference between each raster-page connected component and its scanned-page counterpart. For non-text regions, after global alignment we divide the page into "super pixels" and compute the color difference between the raster super pixels and the testing super pixels. Compared with the traditional method that uses a diagnostic page, our method is more efficient and effective.

The third section concerns CNN-Based Emotion Recognition. We build our own emotion recognition classification and regression system from scratch, covering data set collection, data preprocessing, model training, and testing. We extend the model to a real-time video application, where it performs accurately and smoothly. We also try a second approach to emotion recognition based on Facial Action Unit detection: by extracting facial landmark features and adopting an SVM training framework, the Facial Action Unit approach achieves accuracy comparable to the CNN-based approach.

The fourth section concerns Saliency-Based Image Quality Assessment and Cropping. We propose a method for image quality assessment and recomposition guided by image saliency, the region of an image that naturally and easily attracts a viewer's attention. Through everyday examples as well as our experimental results, we demonstrate that saliency information benefits both tasks.
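The non-text fading check described above can be sketched as a block-wise color comparison. The fragment below is an illustrative stand-in only: it uses fixed square blocks instead of true super pixels and Euclidean RGB distance as the color difference, both assumptions rather than the thesis's actual implementation.

```python
import numpy as np

def superpixel_color_diff(raster, scanned, block=16):
    """Mean per-block color difference between a raster (reference)
    page and a scanned (test) page, both HxWx3 float arrays.
    Fixed blocks stand in for the "super pixels" described above."""
    h, w, _ = raster.shape
    diffs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = raster[y:y+block, x:x+block].reshape(-1, 3).mean(axis=0)
            s = scanned[y:y+block, x:x+block].reshape(-1, 3).mean(axis=0)
            diffs.append(np.linalg.norm(r - s))  # color distance per block
    return np.array(diffs)
```

Blocks whose difference exceeds a calibrated tolerance would then be flagged as faded.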

Analysis of Iteratively Reconstructed CT Data: Novel Methods for Measuring Image Quality

Walek, Petr, January 2019
With the increasing availability of medical CT examinations and the growing number of pathological conditions for which they are indicated, reducing the patient dose of ionizing radiation is an increasingly pressing topic. New methods for reconstructing images from projections, the so-called modern iterative reconstruction methods, represent significant progress in this area. At the same time, their introduction has increased the need for image quality measurement. Until now, the quality of iteratively reconstructed data has been quantitatively assessed only on phantom data or on small regions of interest in real patient data. The character of iteratively reconstructed data, however, suggests that these approaches are no longer sufficient and must be replaced by new ones. The main goal of this dissertation is to propose new approaches to measuring the quality of CT image data that respect the specifics of iteratively reconstructed images and are computed fully automatically, directly from real patient data.

Detection and evaluation of distorted frames in retinal image data

Vašíčková, Zuzana, January 2020
This master's thesis deals with the detection and evaluation of distorted frames in retinal image data. The theoretical part briefly summarizes the anatomy of the eye and methods of image quality assessment, both in general and specifically for retinal images. The practical part was implemented in Python. It includes preprocessing of the available retinal images to create a suitable dataset. A method is then proposed for evaluating three types of noise in distorted retinal images using an Inception-ResNet-v2 model. Because this method proved unsatisfactory, a different two-step method was proposed: classification of the noise type followed by estimation of the level of that noise. The filtered Fourier spectrum was used for noise-type classification, and features extracted with ResNet50 were fed into a regression model for image evaluation. This method was further extended with a step that detects noisy frames in retinal sequences.
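The Fourier-spectrum step can be illustrated with a radially averaged log power spectrum, a common hand-crafted feature for telling noise types apart (white noise is flat, periodic noise shows peaks). This numpy sketch reflects the general idea only, not the thesis's actual filtering pipeline.

```python
import numpy as np

def spectrum_feature(img, n_bins=8):
    """Radially averaged log power spectrum of a grayscale image,
    returned as an n_bins-long feature vector (low to high frequency)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(f) ** 2)
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)        # radius of each pixel
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int),
                      n_bins - 1)               # radial bin index
    return np.array([power[bins == i].mean() for i in range(n_bins)])
```

Such a vector could feed any standard classifier to separate the three noise types.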

Document Quality Enhancement

Trčka, Jan, January 2020
The aim of this work is to increase the accuracy of transcription of text documents. The work focuses mainly on texts printed on degraded materials such as newspapers or old books. To solve this problem, current methods and the problems associated with text recognition are analyzed. Based on this analysis, a method built on a GAN architecture is chosen and implemented. Experiments are performed on these networks to find an appropriate network size and learning parameters. Testing then compares different learning methods and their results. Both training and testing are performed on an artificial data set. The implemented trained networks increase transcription accuracy from 65.61% on the raw damaged text lines to 93.23% on lines processed by the network.
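The accuracy figures above are character-level transcription scores. A minimal proxy can be written with the standard library's `difflib`; note this similarity ratio is an illustrative assumption, not necessarily the exact edit-distance-based metric used in the thesis.

```python
import difflib

def transcription_accuracy(reference, hypothesis):
    """Character-level similarity between a ground-truth text line and
    an OCR transcription, as a fraction in [0, 1]."""
    return difflib.SequenceMatcher(None, reference, hypothesis).ratio()
```

Averaging this over all test lines before and after GAN cleanup gives the kind of improvement quoted above.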

Simulation of the HVS characteristics in Matlab

Ševčík, Martin, January 2008
The theoretical part of this diploma thesis deals with the model of human vision, HVS (Human Visual System), which can be used for image quality assessment in television engineering. Calculations of selected JND (Just Noticeable Difference) metrics used in HVS evaluation are described. In the practical part, a simulation model is designed and implemented in Matlab that evaluates three JND metrics on color and grayscale images, in both the spatial and frequency domains. The results of the JND models are compared with other objective image quality metrics (MSE, NMSE, SNR, and PSNR). Images with different defined content are used to interpret the dependencies.
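The reference metrics mentioned above have standard definitions; a minimal numpy version for 8-bit images (the thesis works in Matlab, but the formulas are identical) is:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between reference and test image."""
    return np.mean((ref.astype(float) - img.astype(float)) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)
```

Unlike JND metrics, these treat every pixel error equally, which is exactly the gap the HVS-based metrics aim to close.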

Enhancement of bio-medical image signals

Gregor, Michal, January 2010
When biomedical images are acquired by magnetic resonance or ultrasound, unwanted noise enters the image. Various methods can partially remove this noise. Many noise-reduction methods exist, each working on a different principle; consequently their results differ and must be objectively assessed. This work adjusts the images using the wavelet transform and several thresholding techniques. The quality of the resulting images is tested with objective quality metrics. Testing was done in the MATLAB environment on images from magnetic resonance and ultrasound.
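The wavelet-shrinkage idea can be sketched in a few lines: decompose, soft-threshold the detail coefficients (which carry most of the noise), and reconstruct. This 1D Haar sketch in numpy illustrates the principle only; the thesis applies it to 2D images in MATLAB.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t, zeroing small (noisy) ones."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_denoise_1d(signal, t):
    """One-level Haar decomposition, soft-threshold the details,
    then reconstruct. Signal length must be even."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, t)
    out = np.empty_like(s)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

The choice of threshold `t` (and of hard vs. soft thresholding) is precisely what the compared techniques vary.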

Quality Assurance of Intra-oral X-ray Images

Daba, Dieudonne Diba, January 2020
Dental radiography is one of the most frequent types of diagnostic radiological investigation, and the equipment and techniques used are constantly evolving. However, dental healthcare has long been neglected by radiation safety legislation and the medical physicist community, so the quality assurance (QA) regime needs an update. This project aimed to implement and evaluate objective tests of key image quality parameters for intra-oral (IO) X-ray images: sensitivity, noise, uniformity, low-contrast resolution, and spatial resolution. These parameters were evaluated for repeatability at typical tube current, voltage, and exposure time settings by computing the coefficient of variation (CV) of the mean value of each parameter over multiple images. A further aim was to develop a semi-quantitative test for the correct alignment of the position indicating device (PID) with the primary collimator. The overall purpose of this thesis was to look at ways to improve the QA of IO X-ray systems by digitizing and automating part of the process.

A single image receptor and an X-ray tube were used in this study. Incident doses at the receptor were measured using a radiation meter, and the relationship between incident dose at the receptor and the output signal was used to determine the signal transfer curve for the receptor, which was found to be linear. The principal sources of noise in the practical exposure range of the system were investigated using a variance-based separation of noise sources, which showed that quantum noise was dominant. Repeatability of the image quality parameters was acceptable: the CV for sensitivity was less than 3%, for noise less than 1%, for uniformity less than 10% at the center and less than 5% at the edge, and for the spatial resolution parameters less than 5%. The low-contrast resolution varied the most at all exposure settings investigated, with a CV between 6% and 13%. The method described to test for the correct alignment of the PID with the primary collimator was found to be practical and easy to interpret manually. The tests described here were implemented for a specific sensor and X-ray tube combination, but the methods could easily be adapted to different systems by adjusting certain parameters.
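The repeatability measure used throughout, the coefficient of variation, is simply the sample standard deviation over the mean, expressed in percent:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, in percent.
    Computed per image-quality parameter across repeated exposures."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```

Applying this to, say, the per-image sensitivity values across repeated exposures yields the sub-3% figure reported above.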

Image quality assessment of High Dynamic Range and Wide Color Gamut images

Rousselot, Maxime, 20 September 2019
Screen technologies have greatly evolved in recent years. For example, the contrast of High Dynamic Range (HDR) rendering systems far exceeds the capacity of a conventional display, and a Wide Color Gamut (WCG) display can cover a larger color space than ever before. Assessing the quality of this new content has become an active field of research, as classical SDR quality metrics are not suited to it. However, state-of-the-art studies often neglect one important image characteristic: chrominance. Existing databases contain HDR images with a standard gamut, neglecting the increase in color space due to WCG, and are therefore less prone to chromatic artifacts than WCG content. Moreover, most existing HDR objective quality metrics consider only luminance and ignore chromatic artifacts. To overcome this problem, this thesis introduces two HDR/WCG databases annotated with subjective scores, focusing on realistic chromatic artifacts that can arise during compression. Using these databases, we explore three solutions for creating HDR/WCG metrics: adapting SDR metrics to HDR/WCG content, extending the well-known HDR metric HDR-VDP-2 with color, and finally merging various quality metrics and color features into a new metric. This last metric predicts quality very well while remaining sensitive to chromatic distortion.
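The fusion approach can be illustrated as a learned combination of per-image metric scores and color features. This least-squares sketch is a toy stand-in under the assumption of a linear model; the thesis likely trains a more sophisticated regressor against the subjective scores.

```python
import numpy as np

def fuse_metrics(features, mos):
    """Fit a linear fusion of per-image features (N x F matrix of metric
    scores and color features) to mean opinion scores (length-N vector);
    returns a predictor for new feature matrices."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    w, *_ = np.linalg.lstsq(X, mos, rcond=None)                 # least squares
    return lambda f: np.hstack([f, np.ones((f.shape[0], 1))]) @ w
```

The annotated HDR/WCG databases supply exactly the (features, subjective score) pairs such a fit needs.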

A Methodical Approach to the Evaluation of Light Transport Computations

Tázlar, Vojtěch, January 2020
Photorealistic rendering has a wide variety of applications, so there are many rendering algorithms and variations tailored to specific use cases. Even though practically all of them perform physically-based simulations of light transport, their results on the same scene often differ, sometimes because of the nature of a given algorithm and sometimes, worse, because of bugs in its implementation. Comparing these algorithms is difficult, especially across different rendering frameworks, because no standardized testing software or dataset is available. The only ways to get an unbiased comparison are to create and use your own dataset, or to reimplement the algorithms in a single rendering framework of choice; both can be difficult and time-consuming. We address these problems with a test suite based on a rigorously defined methodology for evaluating light transport algorithms. We present a scripting framework for automated testing and fast comparison of rendering results, and we provide a documented set of non-volumetric test scenes for the most popular research-oriented rendering frameworks. Our test suite is easily extensible to support additional renderers and scenes.

Evaluating Response Images From Protein Quantification

Engström, Mathias, and Olby, Erik, January 2020
Gyros Protein Technologies develops instruments for automated immunoassays. Fluorescent antibodies are added to samples and excited with a laser, producing a 16-bit image whose intensity correlates with the concentration of bound antibody. Artifacts caused by dust, fibers, or other problems may appear on the images and affect the quantification. This project seeks to detect such artifacts automatically by classifying the images as good or bad using Deep Convolutional Neural Networks (DCNNs). To augment the dataset, a simulation program is developed that generates images from developed simulation models. Several classification models are tested, as well as different training techniques. The highest-performing classifier is a VGG16 DCNN, pre-trained on simulated images, which reaches 94.8% accuracy. The bad class contains many sub-classes, many of which are heavily underrepresented in both the training and test datasets, so little can be said about classification power on these sub-classes. The conclusion is therefore that until more of this rare data can be collected, focus should lie on classifying the more common examples. Using the approaches from this project, we believe this could result in a high-performing product.
