11

Contrast Enhancement of Colour Images using Transform Based Gamma Correction and Histogram Equalization

Gatti, Pruthvi Venkatesh, Velugubantla, Krishna Teja January 2017 (has links)
Contrast is an important factor in any subjective evaluation of image quality: it is the difference in visual properties that makes an object distinguishable from other objects and from the background. Contrast enhancement methods improve the contrast of an image by manipulating its histogram, the graphical representation of the distribution of pixel intensities. Histogram equalization is widely used in image processing to adjust contrast, while gamma correction is typically used to adjust luminance. By combining histogram equalization with gamma correction, we propose a hybrid method that modifies the histogram and enhances the contrast of a digital image. The proposed method builds on variants of histogram equalization and transform-based gamma correction: it is an automatic transformation technique that improves the contrast of dim images via gamma correction driven by the probability distribution of luminance values. The method was implemented as an Android application. We succeeded in enhancing image contrast with this method and tested it for different alpha values; gamma curves for the different alpha values are plotted.
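A minimal sketch of the histogram-driven gamma correction idea described in this abstract, in numpy. The weighting scheme and the `alpha` parameter here are illustrative assumptions, not the thesis's exact formulation; the function name `adaptive_gamma_correction` is hypothetical.

```python
import numpy as np

def adaptive_gamma_correction(channel: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Sketch of transform-based gamma correction driven by the luminance
    histogram. Expects a uint8 single-channel image. `alpha` weights the
    probability distribution; the exact weighting in the thesis may differ."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    pdf = hist / hist.sum()
    # Weight the PDF to temper extreme bins (assumed weighting scheme).
    pdf_w = pdf.max() * (pdf / pdf.max()) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    # Per-intensity gamma: bright intensities get gamma near 1, dim ones less,
    # so dark regions are lifted more strongly.
    levels = np.arange(256) / 255.0
    out_levels = 255.0 * levels ** (1.0 - cdf_w)
    return out_levels[channel].astype(np.uint8)
```

For a colour image this would typically be applied to the luminance (e.g. the V channel of HSV), leaving hue and saturation untouched.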
12

Classification of Dense Masses in Mammograms

Naram, Hari Prasad 01 May 2018 (has links) (PDF)
This dissertation details techniques developed to aid in the classification of tumors, non-tumors, and dense masses in mammograms. Characteristics of the mammographic image, such as texture, are used to identify regions of interest for classification. Pattern recognition techniques, namely the nearest-mean classifier and the support vector machine (SVM) classifier, are used to classify the extracted features. The initial stages process the mammographic image to extract the features needed for classification; in the final stage these features are classified using the pattern recognition techniques mentioned above. The goal of this research is to provide medical experts and researchers with an effective method for identifying tumors, non-tumors, and dense masses in a mammogram. First, the breast region is extracted from the entire mammogram. The extraction is carried out by creating masks and using them to isolate the region of interest pertaining to the tumor. A chain code is employed to extract the various regions, which can potentially be classified as tumors, non-tumors, or dense regions. Adaptive histogram equalization is employed to enhance image contrast; applying it several times yields a saturated image containing only the bright spots of the mammogram, which appear as dense regions. These dense masses could be potential tumors requiring treatment. Texture characteristics of the mammographic image are used for feature extraction, and a total of thirteen Haralick features are used to classify the three classes with the nearest-mean and support vector machine classifiers.
The support vector machine classifier is used for the two-class problems, with a radial basis function (RBF) kernel, and the best (C, gamma) values are determined. The results obtained in this research suggest the best classification accuracy was achieved by the support vector machine for both Tumor vs. Non-Tumor and Tumor vs. Dense Mass: above 90% for tumor vs. non-tumor and 70.8% for dense masses, using 11 features. The support vector machine outperformed the nearest-mean classifier in classifying the classes. Case studies were performed on two distinct datasets, each consisting of data from 24 patients in two individual views, the cranio-caudal and the medio-lateral oblique. From these views the regions of interest, which could be a tumor, non-tumor, or dense region (mass), were extracted.
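The feature-extraction and classification pipeline above can be illustrated with a small numpy sketch: a gray-level co-occurrence matrix yielding three of the classic Haralick statistics (the thesis uses the full set of thirteen), fed to a nearest-mean classifier. The helper names and parameters here are assumptions for illustration.

```python
import numpy as np

def glcm_features(patch: np.ndarray, levels: int = 8) -> np.ndarray:
    """Build a horizontal gray-level co-occurrence matrix and return three
    classic Haralick statistics (contrast, energy, homogeneity). An
    illustrative subset of the thirteen features used in the thesis."""
    q = (patch.astype(np.float64) * levels / 256).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count horizontally adjacent level pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

def nearest_mean_classify(train_feats, train_labels, feat):
    """Nearest-mean classifier: assign the sample to the class whose mean
    feature vector is closest in Euclidean distance."""
    classes = sorted(set(train_labels))
    means = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                        axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(feat - means[c]))
```

In the thesis the SVM (with an RBF kernel) plays the same role as `nearest_mean_classify` here, operating on the full Haralick feature vectors.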
13

An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks / En utvärdering av bildförbehandlingsmetoder för klassificering av malariaparasiter med hjälp av Convolutional Neural Networks

Engelhardt, Erik, Jäger, Simon January 2019 (has links)
In this study, the impact of multiple image preprocessing methods on Convolutional Neural Networks (CNNs) was studied. Metrics such as accuracy, precision, recall and F1-score (Hossin et al. 2011) were evaluated. Specifically, this study is geared towards malaria classification using the data set made available by the U.S. National Library of Medicine (Malaria Datasets n.d.). This data set contains images of thin blood smears, in which uninfected and parasitized blood cells have been segmented. In the study, 3 CNN models were proposed for the parasitization classification task. Each model was trained on the original data set and on 4 preprocessed data sets. The preprocessing methods used to create the 4 data sets were grayscale conversion, normalization, histogram equalization and contrast limited adaptive histogram equalization (CLAHE). CLAHE preprocessing yielded a 1.46% (model 1) and a 0.61% (model 2) improvement over the original data set in terms of F1-score; one model (model 3) produced inconclusive results. The results show that CNNs can be used for parasitization classification, but the impact of preprocessing is limited.
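A simplified numpy sketch of the CLAHE preprocessing evaluated above: per-tile histograms are clipped, the excess is redistributed, and each tile is equalized independently. Real CLAHE implementations (e.g. OpenCV's `createCLAHE`) additionally interpolate between neighbouring tile mappings to avoid block artifacts; that step is omitted here, and the tile count and clip fraction are arbitrary illustrations.

```python
import numpy as np

def clahe_sketch(img: np.ndarray, tiles: int = 4, clip: float = 0.01) -> np.ndarray:
    """Contrast limited adaptive histogram equalization, without the
    bilinear interpolation between tiles that production CLAHE uses."""
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    for ty in range(tiles):
        for tx in range(tiles):
            tile = img[ty*th:(ty+1)*th, tx*tw:(tx+1)*tw]
            hist, _ = np.histogram(tile, bins=256, range=(0, 256))
            limit = max(1, int(clip * tile.size))          # clip limit in counts
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess // 256 # redistribute excess
            cdf = np.cumsum(hist) / hist.sum()
            out[ty*th:(ty+1)*th, tx*tw:(tx+1)*tw] = (cdf * 255)[tile].astype(np.uint8)
    return out
```

In the study's pipeline, a step like this would run once per image before the data set is fed to the CNNs.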
14

Mobile Application Development with Image Applications Using Xamarin

GAJJELA, VENKATA SARATH, DUPATI, SURYA DEEPTHI January 2018 (has links)
Image enhancement improves the appearance of an image by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Image enhancement techniques are widely used in image processing applications where the subjective quality of images is important for human interpretation. In many cases images lack clarity because of fog, low light, and other daylight effects, so such images should be enhanced to make the objects in them clearly recognizable. Histogram-based image enhancement is mainly based on equalizing the histogram of the image and increasing its dynamic range. The histogram equalization algorithm was implemented and tested on different images affected by low light, fog, and poor colour contrast, and succeeded in producing enhanced images. The technique is implemented by averaging the histogram values to form the probability density function. We initially worked with MATLAB code for histogram equalization and then adapted it into an application programming interface (API) using the Xamarin software. The mobile application developed with Xamarin works efficiently and has a shorter execution time than the equivalent application developed in Android Studio. The application was successfully debugged on both Android and iOS. The focus of this thesis is to develop a mobile application for image enhancement of low-light and foggy images using Xamarin.
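The core mapping behind the histogram equalization described above can be sketched in a few lines: the normalized histogram is treated as a probability density function and its cumulative sum gives the intensity transfer curve. The thesis implements this in MATLAB and Xamarin; this numpy version is purely illustrative.

```python
import numpy as np

def histogram_equalize(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization of a uint8 image via the CDF of its
    intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    pdf = hist / gray.size                      # histogram as a PDF
    cdf = np.cumsum(pdf)                        # cumulative distribution
    lut = np.round(cdf * 255).astype(np.uint8)  # intensity transfer function
    return lut[gray]
```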
15

Adaptivní filtry pro 2-D a 3-D zpracování digitálních obrazů / Adaptive Filters for 2-D and 3-D Digital Images Processing

Martišek, Karel January 2012 (has links)
The thesis deals with adaptive filters for the visualization of high-resolution images. The theoretical part describes the working principle of a confocal microscope and gives a mathematically rigorous definition of a digital image. Both a frequency-domain approach (using the 2-D and 3-D discrete Fourier transform and frequency filters) and a digital-geometry approach (using adaptive histogram equalization with an adaptive neighbourhood) are used for image processing. The modifications needed to handle non-ideal images containing additive and impulse noise are also described. The final part of the thesis deals with the spatial reconstruction of objects from their optical sections. All procedures and algorithms are implemented in software developed as part of this work.
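The frequency-domain approach mentioned above can be sketched with a 2-D discrete Fourier transform and an ideal low-pass filter in numpy. The thesis covers both 2-D and 3-D transforms and more elaborate filters; the filter shape and cutoff here are arbitrary illustrations.

```python
import numpy as np

def lowpass_filter_2d(img: np.ndarray, cutoff: float = 0.1) -> np.ndarray:
    """Apply the 2-D DFT, zero out frequencies above a normalized cutoff
    (an ideal low-pass filter), and transform back."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = img.shape
    yy, xx = np.mgrid[-h//2:h - h//2, -w//2:w - w//2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)  # normalized frequency
    f[radius > cutoff] = 0                           # ideal low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

A high-pass or band-pass filter follows the same pattern with a different mask; the 3-D case uses `np.fft.fftn` over a volume of optical sections.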
16

Real-time facial expression analysis : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Computer Science at Massey University, Auckland, New Zealand

Fan, Chao January 2008 (has links)
As computers have become more and more advanced, with even the most basic computer capable of tasks almost unimaginable only a decade ago, researchers and developers are focusing on improving the way that computers interact with people in their everyday lives. A core goal, therefore, is to develop a computer system which can understand and react appropriately to natural human behaviour. A key requirement for such a system is the ability to automatically, and in real time, recognise human facial expressions. In addition, this must be achieved regardless of the inherent differences between human faces or variations in lighting and other external conditions. The focus of this research was to develop such a system by evaluating and then utilizing the most appropriate of the many image processing techniques currently available and, where appropriate, developing new methodologies and algorithms. The first key step in the system is to recognise a human face with acceptable levels of misses and false positives. This research analysed and evaluated a number of different face detection techniques before developing a novel algorithm which combines phase congruency and template matching. This novel algorithm provides key advantages over existing techniques because it can detect faces rotated to any angle, and it works in real time; existing techniques could only recognise faces rotated less than 10 degrees (in either direction), and most could not work in real time due to excessive computational power requirements. The next step for the system is to enhance and extract the facial features. To achieve the stated goal, the enhancement and extraction of the facial features must reduce the number of feature dimensions so that the system can operate in real time, while providing sufficiently clear and detailed features to allow the facial expressions to be accurately recognised.
This part of the system was completed by developing a novel algorithm, based on the existing Contrast Limited Adaptive Histogram Equalization technique, which quickly and accurately represents facial features, and another novel algorithm which reduces the number of feature dimensions by combining radon transformation and fast Fourier transformation techniques, ensuring real-time operation is possible. The final step for the system is to use the information provided by the first two steps to accurately recognise facial expressions. This is achieved using an SVM trained on a database of both real and computer-generated facial images with various facial expressions. The system developed during this research can be utilised in a number of ways and, most significantly, has the potential to revolutionise future interactions between humans and computers by helping those interactions become natural and intuitive. In addition, individual components of the system also have significant potential; for example, the algorithms which allow the recognition of an object regardless of its rotation are under consideration as part of a project aiming to achieve non-invasive detection of early stage cancer cells.
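The template matching half of the face detection stage can be sketched with normalized cross-correlation (NCC) in numpy. The thesis combines this with phase congruency to gain rotation invariance; only plain NCC over a sliding window is shown here, and the function name is hypothetical.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image and score each position with the
    normalized cross-correlation; return the best position and score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y+th, x:x+tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            best, best_pos = max((best, best_pos), (score, (y, x)))
    return best_pos, best
```

Real-time variants compute the same scores in the frequency domain rather than with this O(image x template) double loop.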
17

使用適應性直方圖均衡化之加速與風格化淺浮雕生成 / Fast and stylized bas-relief generation using adaptive histogram equalization

黃嗣心, Huang, Ssu Shin Unknown Date (has links)
Relief is a sculptural technique for expressing shape features on a flat surface; it is an art medium between 3D sculpture and 2D painting. In this thesis, we focus on bas-relief, a relatively low relief that compresses the depth of a 3D scene to a shallow overall depth while preserving the details of the shape. We use adaptive histogram equalization (AHE) to compress the depth range and enhance details, and accelerate the AHE computation by sample reduction, which favours interactive user-customized stylization. Furthermore, adding special carving patterns that follow the feature flows of the scene enriches the stylization of the generated relief.
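The depth-compression idea above can be sketched by treating the scene's depth map like an intensity image: equalize its histogram so detail at every depth gets a share of the output range, then rescale to the shallow overall depth of a bas-relief. The thesis uses adaptive histogram equalization plus sample reduction for speed; this global version, with an assumed output depth, only illustrates the principle.

```python
import numpy as np

def basrelief_depth(depth: np.ndarray, out_depth: float = 0.05) -> np.ndarray:
    """Compress a depth map to [0, out_depth] via histogram equalization,
    preserving relative depth ordering while flattening the range."""
    hist, edges = np.histogram(depth, bins=512)
    cdf = np.cumsum(hist) / depth.size
    # Map each depth value through the monotone CDF, then into [0, out_depth].
    idx = np.clip(np.searchsorted(edges, depth, side="right") - 1, 0, 511)
    return cdf[idx] * out_depth
```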
19

Modelos de compressão de dados para classificação e segmentação de texturas

Honório, Tatiane Cruz de Souza 31 August 2010 (has links)
This work analyzes methods for the classification and segmentation of texture images using models from lossless data compression algorithms. Two data compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), which had been applied to texture classification in previous works. The textures are pre-processed using histogram equalization. The classification method is divided into two stages. In the learning (training) stage, the compression algorithm builds statistical models for the horizontal and vertical structures of each class. In the classification stage, samples of the textures to be classified are compressed using the models built in the learning stage, sweeping the samples horizontally and vertically. A sample is assigned to the class that obtains the highest average compression. The classifiers were tested using the Brodatz texture album, for various context sizes (in the PPM case), numbers of samples, and training sets. For some combinations of these parameters, the classifiers achieved 100% correct classification. Texture segmentation was performed only with the PPM. First, the horizontal models are created using eight texture samples of size 32 x 32 pixels for each class, with a PPM context of maximum size 1. The images to be segmented are compressed with the class models, initially in blocks of size 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of size 32 x 32. The process is repeated until some model reaches a compression ratio within the interval defined for the block size in question. If a block reaches size 4 x 4, it is assigned to the class of the model that achieved the highest compression ratio.
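The compression-based classification principle above can be sketched with Python's `zlib` (a dictionary coder) standing in for the PPM/LZW models: a sample is assigned to the class whose training corpus makes it most compressible, i.e. where appending the sample adds the fewest compressed bytes. The thesis builds explicit horizontal and vertical statistical models instead of this append-and-compress approximation; function names here are hypothetical.

```python
import zlib

def compression_classify(train: dict, sample: bytes) -> str:
    """Assign `sample` to the class whose training data compresses it best.
    `train` maps class label -> list of training samples (bytes)."""
    def extra_bytes(corpus: bytes, s: bytes) -> int:
        # Marginal compressed size of s given the corpus as prior context.
        return len(zlib.compress(corpus + s, 9)) - len(zlib.compress(corpus, 9))
    best_class, best_cost = None, None
    for label, samples in train.items():
        cost = extra_bytes(b"".join(samples), sample)
        if best_cost is None or cost < best_cost:
            best_class, best_cost = label, cost
    return best_class
```

For images, each row (or column, for the vertical models) of a texture patch would be serialized to bytes before being fed to the classifier.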
20

Ανάπτυξη τεχνικών επεξεργασίας ιατρικών δεδομένων και συστημάτων υποστήριξης της διάγνωσης στη γυναικολογία

Βλαχοκώστα, Αλεξάνδρα 25 May 2015 (has links)
Automatic analysis of endometrial images is a difficult and multidimensional problem, and the number of papers and techniques addressing it is large. In this thesis, a methodology is presented, based on advanced image processing techniques, for automatically estimating texture and vessel features in endometrial images. The motivation for the thesis is the fact that variations in these features play a significant role in the timely diagnosis of endometrial disorders. An appropriate methodology is developed to estimate the features for both hysteroscopic and histological images of the endometrium. An important step is the pre-processing of the images to enhance image quality and contrast. The pixels that constitute the centerlines of vessels are then detected using differential calculus, for the hysteroscopic images only, and the texture and vessel features are estimated for both hysteroscopic and histological images. Finally, appropriate classification algorithms are applied to distinguish pathological from normal endometrial images, and ROC analysis is used to evaluate the discriminative power of the estimated features.
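The differential-calculus centerline detection mentioned above can be sketched with a Hessian-based ridge measure: at a bright vessel centerline, the second derivative across the vessel is strongly negative, so the most negative Hessian eigenvalue marks ridge pixels. This is one common formulation, assumed here for illustration; real pipelines smooth the image first and threshold or trace the resulting map, both of which are omitted.

```python
import numpy as np

def ridge_strength(img: np.ndarray) -> np.ndarray:
    """Ridge-strength map from the most negative eigenvalue of the
    finite-difference Hessian; high values indicate bright centerlines."""
    gy, gx = np.gradient(img.astype(np.float64))
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of the 2x2 symmetric Hessian [[gxx, gxy], [gxy, gyy]].
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))
    lam_min = tr / 2 - disc   # most negative eigenvalue
    return -lam_min           # high where a bright ridge runs
```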
