91

Metody pro zjišťování podobnosti obrazů / Methods for Determining the Similarity of Images

Jandera, Pavel January 2012 (has links)
The theoretical part of this thesis deals with the procedures used for searching image databases. Two basic approaches are discussed: text-based searching and content-based searching. The next section describes methods for detecting image similarity. The practical part gives a detailed description and implementation of three selected image features used for image searching. The third part presents the testing procedure for the implemented algorithms and the test results. The conclusion describes the implementation of a RapidMiner operator that uses all implemented algorithms and allows image similarity matching, searching for the most similar images in a database, and copying these images to an output folder.
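As a hedged illustration of content-based similarity matching of the kind the abstract above describes (the thesis's three actual features are not specified here), the following sketch compares images via normalized intensity histograms and histogram intersection:

```python
def normalized_histogram(pixels, bins=8, max_val=256):
    """Build a normalized intensity histogram from a list of pixel values."""
    hist = [0] * bins
    step = max_val / bins
    for p in pixels:
        hist[min(int(p / step), bins - 1)] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Ranking a database then amounts to sorting images by their intersection score against the query histogram and copying the top matches to the output folder.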
92

Měření výšky postavy v obraze / Height Measurement in Digital Image

Olejár, Adam January 2015 (has links)
The aim of this paper is a summary of the theory necessary for detecting a person in an image and calculating the detected person's height. This information was then used to implement the algorithm. The first half covers the theoretical problems and solutions: it shows the basic methods of image preprocessing, discusses the basic concepts of plane and projective geometry and transformations, describes the distortion that imperfections in camera optical systems introduce into the picture and the possibilities for removing it, and explains the HOG algorithm and the actual method of calculating the height of a person detected in the image. The second half describes the algorithm structure and the statistical evaluation.
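The height calculation described above rests on projective geometry; under the simplifying assumption of an undistorted pinhole camera with known focal length and subject distance (parameters assumed for illustration, not taken from the thesis), the similar-triangles estimate is:

```python
def person_height_m(pixel_height, distance_m, focal_length_px):
    """Pinhole-camera height estimate from similar triangles:
    real_height / distance = pixel_height / focal_length."""
    return pixel_height * distance_m / focal_length_px
```

In practice the pixel height would come from the bounding box of a HOG person detection, after lens distortion has been removed by calibration.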
93

Vyhledávání objektů v obraze na základě předlohy / Image object detection using template

Novák, Pavel January 2014 (has links)
This thesis is focused on image object detection using a template. The main contribution of this work is a new method for extracting features from histograms of oriented gradients using a set of comparators. The methods used for image comparison and feature extraction are described, with the main part devoted to the histogram of oriented gradients (HOG) method, from which the new method is derived. A small training data set (100 samples) verified by cross-validation is used, followed by tests on real scenes. The success rate achieved with cross-validation is 98% for the SVM algorithm.
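Comparator-based feature extraction from a HOG descriptor might, as a rough sketch (the thesis's exact comparator set is not given here), binarize pairwise bin comparisons:

```python
def comparator_features(hog, pairs):
    """Each comparator outputs 1 if HOG bin i exceeds bin j, else 0.

    hog:   flat list of HOG bin magnitudes
    pairs: list of (i, j) index pairs defining the comparator set
    """
    return [1 if hog[i] > hog[j] else 0 for i, j in pairs]
```

The resulting binary vector is compact and invariant to overall gradient magnitude, which makes it a reasonable input for an SVM.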
94

Detekce, lokalizace a rozpoznání dopravních značek / Detection, Localization and Recognition of Traffic Signs

Svoboda, Tomáš January 2011 (has links)
This master's thesis deals with the localization, detection and recognition of traffic signs. The possibilities for selecting areas where traffic signs may occur are analysed. The properties of different kinds of features used for traffic sign recognition are then described, with a focus on features based on the histogram of oriented gradients. Some possible classifiers are discussed, first of all the cascade of support vector machines used in the resulting system. A description of the system implementation and of data sets for 5 types of traffic signs is part of this thesis. Many experiments were carried out with the created system, with very good results. New datasets were acquired from approximately 9 hours of processed video sequences; they contain about 13,500 images.
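The cascade-of-classifiers structure mentioned above can be sketched generically; the early-rejection pattern below is illustrative (the stage scoring functions and thresholds are assumptions, not the thesis's actual SVM stages):

```python
def cascade_predict(stages, x):
    """Run x through a classifier cascade.

    stages: list of (score_fn, threshold) pairs; a candidate is rejected
    as soon as one stage scores it below its threshold, which keeps
    most of the image's negative windows cheap to discard.
    """
    for score, threshold in stages:
        if score(x) < threshold:
            return False   # early rejection
    return True            # accepted by every stage
```

Because the vast majority of candidate windows fail an early stage, only the rare sign-like regions pay the cost of the full cascade.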
95

An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks / En utvärdering av bildförbehandlingsmetoder för klassificering av malariaparasiter med hjälp av Convolutional Neural Networks

Engelhardt, Erik, Jäger, Simon January 2019 (has links)
In this study, the impact of multiple image preprocessing methods on convolutional neural networks (CNNs) was studied. Metrics such as accuracy, precision, recall and F1-score (Hossin et al. 2011) were evaluated. Specifically, this study is geared towards malaria classification using the data set made available by the U.S. National Library of Medicine (Malaria Datasets n.d.). This data set contains images of thin blood smears, where uninfected and parasitized blood cells have been segmented. In the study, 3 CNN models were proposed for the parasitization classification task. Each model was trained on the original data set and 4 preprocessed data sets. The preprocessing methods used to create the 4 data sets were grayscale conversion, normalization, histogram equalization and contrast limited adaptive histogram equalization (CLAHE). CLAHE preprocessing yielded a 1.46% (model 1) and 0.61% (model 2) improvement over the original data set in terms of F1-score; one model (model 3) provided inconclusive results. The results show that CNNs can be used for parasitization classification, but the impact of preprocessing is limited.
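Of the preprocessing methods listed above, plain histogram equalization is the simplest to sketch; the following is a minimal stdlib implementation for a flat list of 8-bit pixel values (an illustration only, not the study's actual pipeline, which also includes CLAHE):

```python
def equalize(pixels, levels=256):
    """Global histogram equalization: map each value through the image CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:           # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

CLAHE differs by applying this mapping per tile with a clipped histogram, which limits noise amplification in near-uniform regions of the blood smear images.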
96

The effect of noise filters on DVS event streams : Examining background activity filters on neuromorphic event streams / Brusreduceringens inverkan på synsensorer : En studie kring brusreduceringens inverkan på händelseströmmar ifrån neuromorfiska synsensorer

Trogadas, Giorgos, Ekonoja, Larissa January 2021 (has links)
Image classification using data from neuromorphic vision sensors is a challenging task that affects the use of dynamic vision sensor cameras in real-world environments. One impeding factor is noise in the neuromorphic event stream, which is often generated by the dynamic vision sensors themselves. This means that effective noise filtration is key to successful use of event-based data streams in real-world applications. In this paper we harness two feature representations of neuromorphic vision data in order to apply conventional frame-based image tools to the neuromorphic event stream. We use a standard noise filter to evaluate the effectiveness of noise filtration on a popular dataset converted to neuromorphic vision data. The two feature representations are the best-of-class standard Histograms of Averaged Time Surfaces (HATS) and a simpler grid matrix representation. To evaluate the effectiveness of the noise filter, we compare classification accuracies using various noise filter windows at different noise levels, adding artificially generated Gaussian noise to the dataset. Performance is reported as classification accuracy. Our results show that the classification accuracy using frames generated with HATS is not significantly improved by a noise filter, whereas the accuracy of frames generated with the more traditional grid representation is improved. These results can be refined and tuned for other datasets and may eventually contribute to on-the-fly noise reduction in neuromorphic vision sensors. / Event cameras are a new type of camera that registers small changes of light in the camera's field of view. The sensor they are built on is modelled after the retina in our eyes, which consists of thin layers of cells that convert light into nerve signals; because these vision sensors mimic the nervous system, they are called neuromorphic vision sensors. To register small changes of light, the sensors must be very sensitive, which also generates electronic noise. This noise degrades signal quality, becomes a limiting factor when the sensors are used in practice, and puts a premium on finding effective noise reduction methods. This thesis examines two digital representations that convert the signal from event cameras into something resembling ordinary images, usable with traditional image recognition methods. We examine the effect of noise filtering on the overall recognition accuracy achieved by an artificial intelligence, which we challenge by adding extra normally distributed noise to the signal. The representations used are histograms of averaged time surfaces (HATS) and a matrix representation. We show that HATS is robust and generates representations that let the AI maintain good accuracy even at high noise levels, making the effect of noise filtering negligible, while the matrix representation benefits from noise filtering at higher noise levels.
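A common background activity filter for event streams, of the kind evaluated above, keeps an event only if a neighbouring pixel fired recently; a minimal sketch (the time window and tuple layout are assumptions, not the paper's exact configuration):

```python
def background_activity_filter(events, dt_us=1000):
    """Keep an event only if an 8-neighbour pixel fired within dt_us before it.

    events: time-sorted list of (timestamp_us, x, y, polarity) tuples.
    Isolated events with no recent spatial support are treated as noise.
    """
    last_seen = {}   # (x, y) -> timestamp of the most recent event there
    kept = []
    for t, x, y, pol in events:
        supported = any(
            (x + dx, y + dy) in last_seen
            and t - last_seen[(x + dx, y + dy)] <= dt_us
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        if supported:
            kept.append((t, x, y, pol))
        last_seen[(x, y)] = t
    return kept
```

Varying `dt_us` corresponds to the "noise filter windows" compared in the study: a short window discards more background activity but risks dropping genuine edge events.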
97

Fluctuations, Phase Separation and Wetting Films near Liquid-Gas Critical Point

Oprisan, Ana 22 May 2006 (has links)
Gravity on Earth limits the study of the properties of pure fluids near the critical point because they become stratified under their own weight. Near the critical point, all thermodynamic properties either diverge or converge, and heating and cooling cause instabilities of the convective flow as a consequence of the divergence of the expansibility. In order to study boiling, fluctuation and phase separation processes near the critical point of pure fluids without the influence of Earth's gravity, a number of experiments were performed in the weightlessness of the Mir space station. The experimental setup, the ALICE II instrument, was designed to suppress sedimentation and buoyancy-driven flow. Another set of experiments was carried out on Earth using a carefully density-matched system of deuterated methanol-cyclohexane to observe critical fluctuations directly. The experiments performed on board the Mir space station studied boiling and wetting film dynamics during evaporation near the critical point of two pure fluids (sulfur hexafluoride and carbon dioxide) using a defocused grid method. The specially designed cell containing the pure fluid was heated and, as a result, a low-contrast line appeared on the wetting film that corresponded to a sharp change in the thickness of the film. A large mechanical response was observed in response to the cell heating, and we present quantitative results on the receding contact lines. It is found that the vapor recoil force is responsible for the receding contact line. Local density fluctuations were observed by illuminating a cylindrical cell filled with the pure fluid near its liquid-gas critical point and recorded using a microscope and a video recorder. Microscopic fluctuations were analyzed both in sulfur hexafluoride and in a binary mixture of methanol and cyclohexane.
Using image processing techniques, we were able to estimate the properties of the fluid from the recorded images showing fluctuations of the transmitted and scattered light. We found that the histogram of an image can be fitted to a Gaussian relationship and by determining its width we were able to estimate the position of the critical point. The characteristic length of the fluctuations corresponding to the maximum of the radial average of the power spectrum was also estimated. The power law growth for the early stage of the phase separation was determined for two different temperature quenches in pure fluid and these results are in agreement with other experimental results and computational simulations.
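The Gaussian fit to the image histogram described above yields a width estimate; one simple hedged sketch (the thesis's actual fitting procedure is not specified here) recovers the Gaussian sigma from the full width at half maximum of the histogram:

```python
import math

def gaussian_sigma_from_histogram(hist):
    """Estimate sigma of a roughly Gaussian histogram from its FWHM.

    For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma, so dividing the
    measured half-maximum width by that constant recovers sigma (in bins).
    """
    peak = max(hist)
    half = peak / 2.0
    above = [i for i, h in enumerate(hist) if h >= half]
    fwhm = above[-1] - above[0]
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```

Tracking how this width changes with temperature is one way such image statistics can locate the critical point.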
98

Agrégation de modèles en apprentissage statistique pour l'estimation de la densité et la classification multiclasse / Aggregate statistical learning methods for density estimation and multiclass problems

Bourel, Mathias 31 October 2013 (has links)
Ensemble methods in statistical learning combine several base learners built from the same data set in order to obtain a more stable predictor with better performance. Such methods have been extensively studied in the supervised context for regression and classification. In this work we first consider the extension of these approaches to density estimation. We suggest several new estimators, obtained as linear combinations of histograms in the same spirit as bagging and boosting, which differ mainly in the nature of the randomness introduced at each aggregation step, and we show the efficiency of the combined density estimators by extensive simulations against similar approaches and classical estimators over a varied choice of models. 
We also give theoretical results for one of our algorithms (Random Averaged Shifted Histogram) by means of asymptotic convergence under mild conditions. A second part is devoted to extensions of boosting algorithms to the multiclass case. We propose a new algorithm (Adaboost.BG) that produces a final classifier from an error calculation accounting for the individual margin of each base model introduced into the aggregation, and we show its efficiency by simulations, comparing it to the most widely used methods in this context on several benchmark machine learning datasets. Partial theoretical results are given for our algorithm, such as the exponential decrease of the learning-set misclassification error to zero.
99

COMPARING AND IMPROVING FACIAL RECOGNITION METHOD

Sierra, Brandon Luis 01 September 2017 (has links)
Facial recognition is the process by which a sample face can be correctly identified by a machine amongst a group of different faces. With the never-ending need for improvement in the fields of security, surveillance, and identification, facial recognition is becoming increasingly important. Considering this importance, it is imperative that the correct faces are recognized and the error rate is as low as possible. Despite the wide variety of current methods for facial recognition, there is no clear-cut best method. This project reviews and examines three different methods for facial recognition: Eigenfaces, Fisherfaces, and Local Binary Patterns, to determine which method has the highest prediction accuracy. The three methods are reviewed and then compared via experiments, using OpenCV, CMake, and Visual Studio as tools. Analyses were conducted to identify which method has the highest prediction accuracy under various experimental factors. A number of sample images of different people served as experimental subjects: the machine is first trained to generate features for each person among the testing subjects; then a new image is tested against the "learned" data and labeled as one of the subjects. From the experimental data analysis, the Eigenfaces method was determined to have the highest prediction rate of the three algorithms tested, and the Local Binary Pattern Histogram (LBPH) method the lowest. LBPH was therefore selected for algorithm improvement. In this project, LBPH was improved by identifying the most significant regions of the histograms for each person at training time. The weight of each region is assigned depending on the gray-scale contrast. At recognition time, given a new face, different weights are assigned to different regions to increase the prediction rate and also speed up real-time recognition. 
The experimental results confirmed the performance improvement.
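The regional LBP histograms and weighting described above can be sketched as follows; the weighted chi-square comparison is a standard choice for LBPH matching, though the project's exact contrast-based weight assignment is not reproduced here:

```python
def lbp_code(img, x, y):
    """8-neighbour LBP code for pixel (x, y) of a 2D list of intensities."""
    center = img[y][x]
    neighbours = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                  img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:       # neighbour at least as bright as center
            code |= 1 << bit
    return code

def weighted_chi_square(region_hists_a, region_hists_b, weights):
    """Chi-square distance between regional LBP histograms, weighted per region."""
    total = 0.0
    for w, ha, hb in zip(weights, region_hists_a, region_hists_b):
        total += w * sum((a - b) ** 2 / (a + b)
                         for a, b in zip(ha, hb) if a + b > 0)
    return total
```

Setting a region's weight to zero effectively drops it from the comparison, which is how down-weighting insignificant regions can speed up recognition.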
100

Tumour Control and Normal Tissue Complication Probabilities: Can they be correlated with the measured clinical outcomes of prostate cancer radiotherapy?

Hornby, Colin, n/a January 2006 (has links)
The chief aim in developing radiation treatment plans is to maximise tumour cell kill while minimising the killing of normal cells. The acceptance by a radiation oncologist of a radiation therapy treatment plan devised by the radiation therapist is at present largely based on the oncologist's previous clinical experience, with reference to established patterns of treatment and their clinical interpretation of the dose volume histogram. Some versions of radiotherapy planning software now incorporate a function that permits biologically based predictions of the probability of tumour control (TCP) and/or normal tissue complications (NTCP). The biological models used for these probabilities are founded upon statistical and mathematical principles as well as radiobiological concepts. TCP and NTCP potentially offer the capability of better optimising treatments for an individual patient's tumour and normal anatomy. There have been few attempts in the past to correlate NTCPs with actual treatment complications, and the reported complications have generally not shown any significant correlation. Determining whether either or both NTCPs and TCPs can be correlated with the observed clinical outcomes of prostate radiotherapy is therefore the central topic of this thesis. In this research, TCPs and NTCPs were prospectively calculated for prostate cancer patients receiving radiation therapy, and subsequently assessed against the clinical results of the delivered treatments. Two different types of NTCP model were used and correlated against observed treatment-induced complications in the rectum and bladder; the two models were also compared to determine their relative efficacy in predicting the recorded toxicities. 
As part of this research the refinement of some of the published bladder parameters required for NTCP calculations was undertaken to provide a better fit between predicted and observed complication rates for the bladder wall which was used in this research. TCPs were also calculated for each patient using the best available estimate of the radiosensitivity of the prostate gland from recent research. The TCP/NTCP data was analysed to determine if any correlations existed between the calculated probabilities and the observed clinical data. The results of the analyses showed that a correlation between the NTCP and a limited number of toxicities did occur. Additionally the NTCP predictions were compared to existing parameters and methods for radiotherapy plan evaluation - most notably DVHs. It is shown that NTCPs can provide superior discriminatory power when utilised for prospective plan evaluation. While the TCP could not be correlated with clinical outcomes due to insufficient follow-up data, it is shown that there was a correlation between the TCP and the treatment technique used.
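NTCP calculations of the kind discussed above are commonly based on the Lyman (LKB) model; as a hedged sketch (the thesis does not name its two models here, and the parameter values below are illustrative, not clinical), the probit form is:

```python
from math import erf, sqrt

def lkb_ntcp(eud, td50, m):
    """Lyman model: NTCP as the normal CDF of the scaled dose excess.

    eud:  equivalent uniform dose to the organ (Gy)
    td50: dose giving a 50% complication probability (Gy)
    m:    slope parameter of the dose-response curve
    """
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

By construction the model returns exactly 0.5 at eud = td50 and rises steeply around it for small m, which is why fitting td50 and m to observed toxicities (as done for the bladder parameters above) matters so much.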
