331

Feature Extraction From Images of Buildings Using Edge Orientation and Length

Danhall, Viktor January 2012 (has links)
Extracting information from a scene captured in digital images, where the information represents some kind of feature, is an important process in image analysis. Both the speed and the accuracy of this process are very important, since many analysis applications either require the analysis of very large data sets or require the data to be extracted in real time. Examples of such applications include two-dimensional as well as three-dimensional object recognition and motion detection. This work focuses on the extraction of salient features from scenes of buildings, using a joint histogram over both edge orientation and edge length to aid in the extraction of the relevant features. The results are promising but require further refinement before they can be used successfully, and the thesis therefore also devotes considerable space to reflection on the underlying theory.
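To make the central idea concrete, a joint two-dimensional histogram over edge orientation and edge strength can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: gradient magnitude stands in for edge length (which would require edge linking), and all names are assumptions.

```python
import numpy as np

def joint_edge_histogram(image, n_orient=8, n_mag=8):
    """Joint histogram over edge orientation and gradient magnitude.

    The thesis pairs orientation with edge *length*; here gradient
    magnitude is used as a stand-in for length, for brevity.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    orient = np.arctan2(gy, gx)      # orientation in [-pi, pi]
    mask = mag > mag.mean()          # keep only strong edge pixels
    hist, _, _ = np.histogram2d(
        orient[mask], mag[mask],
        bins=[n_orient, n_mag],
        range=[[-np.pi, np.pi], [0.0, mag.max()]])
    return hist / hist.sum()         # normalize to a distribution

# Toy image: a single vertical step edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
h = joint_edge_histogram(img)
```

A real pipeline would bin actual edge-segment lengths along the second axis, but the joint-binning structure is the same.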
332

Feature Extraction for Image Selection Using Machine Learning / Särdragsextrahering för bildurval vid användande av maskininlärning

Lorentzon, Matilda January 2017 (has links)
During flights with manned or unmanned aircraft, continuous recording can result in a very high number of images to analyze and evaluate. To simplify image analysis and to minimize data link usage, appropriate images should be suggested for transfer and further analysis. This thesis investigates features used for selection of images worthy of further analysis using machine learning. The selection is done based on the criteria of having good quality, salient content and being unique compared to the other selected images. The investigation is approached by implementing two binary classifications, one regarding content and one regarding quality. The classifications are made using support vector machines. For each of the classifications three feature extraction methods are performed and the results are compared against each other. The feature extraction methods used are histograms of oriented gradients, features from the discrete cosine transform domain and features extracted from a pre-trained convolutional neural network. The images classified as both good and salient are then clustered based on similarity measures retrieved using color coherence vectors. One image from each cluster is retrieved and those are the resulting images from the image selection. The performance of the selection is evaluated using the measures precision, recall and accuracy. The investigation showed that using features extracted from the discrete cosine transform provided the best results for the quality classification. For the content classification, features extracted from a convolutional neural network provided the best results. The similarity retrieval proved to be the weakest part, and the entire system together provides an average accuracy of 83.99%.
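The first of the three feature families can be illustrated with a minimal histogram-of-oriented-gradients descriptor. This is a simplification (no block normalisation, fixed cell grid) rather than the thesis's implementation, and all names are illustrative:

```python
import numpy as np

def hog_features(image, cell=8, n_bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of unsigned
    gradient orientation, weighted by gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = image.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(feats)

# Toy 32x32 image: a horizontal intensity ramp (constant gradient).
img = np.outer(np.ones(32), np.arange(32, dtype=float))
f = hog_features(img)   # 4x4 cells x 9 bins = 144 values
```

In the thesis these descriptors feed a support vector machine; here only the feature extraction is sketched.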
333

Face and texture image analysis with quantized filter response statistics

Ahonen, T. (Timo) 18 August 2009 (has links)
Image appearance descriptors are needed for different computer vision applications dealing with, for example, detection, recognition and classification of objects, textures, humans, etc. Typically, such descriptors should be discriminative enough to distinguish between different classes, yet robust to intra-class variations due to imaging conditions, natural changes in appearance, noise, and other factors. The purpose of this thesis is the development and analysis of photometric descriptors for the appearance of real-life images. The two application areas included in this thesis are face recognition and texture classification. To facilitate the development and analysis of descriptors, a general framework for image description using statistics of quantized filter bank responses, modeling their joint distribution, is introduced. Several texture and other image appearance descriptors, including the local binary pattern operator, can be presented using this model. This framework, within which the thesis is presented, enables experimental evaluation of the significance of each component of the three-part chain that forms a descriptor from an input image. The main contribution of this thesis is a face representation method using distributions of local binary patterns computed in local rectangular regions. An important aspect of this contribution is to view feature extraction from a face image as a texture description problem. This representation is further developed into a more precise model by estimating local distributions using kernel density estimation. Furthermore, a face recognition method tolerant to image blur using local phase quantization is presented. The thesis presents three new approaches and extensions to texture analysis using quantized filter bank responses. The first two aim at increasing the robustness of the quantization process.
The soft local binary pattern operator accomplishes this by making a soft quantization to several labels, whereas Bayesian local binary patterns make use of a prior distribution of labelings, and aim for the one maximizing the a posteriori probability. Third, a novel method for computing rotation invariant statistics from histograms of local binary pattern labels using the discrete Fourier transform is introduced. All the presented methods have been experimentally validated using publicly available image datasets and the results of experiments are presented in the thesis. The face description approach proposed in this thesis has been validated in several external studies, and it has been utilized and further developed by several research groups working on face analysis.
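The basic 8-neighbour local binary pattern operator that all of these descriptors build on can be sketched in a few lines. This is illustrative code (hard quantization only, no soft or Bayesian labeling, no rotation invariance), not the thesis's implementation:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP codes for the interior pixels: each
    neighbour contributes one bit, set when it is >= the center."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    # The 8 neighbours in a fixed circular order.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(gray, n_bins=256):
    """Normalized histogram of LBP codes -- the descriptor that is
    then compared across regions or images."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, 256))
    return hist / hist.sum()

# On a perfectly flat patch every neighbour equals the center,
# so every pixel gets the all-ones code 255.
flat = np.full((8, 8), 7)
h = lbp_histogram(flat)
```

The face representation in the thesis computes such histograms per rectangular region and concatenates them; the rotation-invariant extension applies a discrete Fourier transform to histograms of this kind.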
334

Selective feature preserved elastic surface registration in complex geometric morphology

Jansen van Rensburg, G.J. (Gerhardus Jacobus) 22 September 2011 (has links)
Deforming a complex generic shape into a representation of another complex shape is investigated. An initial study is done on the effect of cranial shape variation on masticatory induced stress. A finite element analysis is performed on two different skull geometries. One skull geometry has a prognathic shape, characterised by jaws protruding forward, while the other has a non-prognathic form. Comparing the results of the initial finite element analyses, the effect of an undesired variation in shape and topology on the resulting stress field is observed. This variation in shape and topology cannot be attributed to the cranial shape variation that is investigated. This means that the variation in the masticatory induced stress field that is due to the relative degree of prognathism cannot be quantified effectively. To best compare results, it would be beneficial to have a computational domain for the different skull geometries that has one-to-one correspondence. An approach to obtain a computational domain that represents various geometries with the exact same mesh size and connectivity between them does exist. This approach involves deforming a generic mesh to represent different target shapes. This report covers an introductory study to register and deform a generic mesh to approximately represent a complex target geometry. Various procedures are investigated, implemented and combined to specifically accommodate complex geometries like that of the human skull. A surface registration procedure is implemented and combined with a feature registration procedure. Feature lines are extracted from the surface representation of each skull as well as the generic shape. These features are compared and an initial deformation is applied to the generic shape to better represent the corresponding features on the target. Selective feature preserved elastic surface registration is performed after the initial feature based registration.
During surface registration, only featureless areas, matched feature areas and user-selected areas are allowed to register to the target surface. The implemented procedures have various aspects that still require improvement before the desired study regarding prognathism's effect on masticatory induced stress could truly be approached pragmatically. Focus is only given to the use of existing procedures, while the additional required improvements could be addressed in future work. It is however required that the resulting discretised domain obtained in this initial study be of sufficient quality to be used in a finite element analysis (FEA). The implemented procedure is illustrated using the two original skull geometries. Symmetric versions of these geometries are generated with a one-to-one correspondence map between them. The skull representations are then used in a finite element analysis to illustrate the appeal of having computational domains with a consistent mapping between them. The variation in the masticatory induced stress field due to the variation in cranial shape is illustrated using the consistent mapping between the geometries as part of this example. / Dissertation (MEng)--University of Pretoria, 2011. / Mechanical and Aeronautical Engineering / unrestricted
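The simplest building block of such registration, rigid least-squares alignment of corresponding point sets (the Kabsch/Procrustes solution), can be sketched as follows. The elastic, feature-preserving steps of the thesis go well beyond this; the sketch only illustrates the alignment primitive, with illustrative names and synthetic data:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch) of two 3-D point sets
    with known one-to-one correspondence. Returns R, t such that
    target ~= source @ R.T + t."""
    sc, tc = source.mean(0), target.mean(0)
    H = (source - sc).T @ (target - tc)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ sc
    return R, t

# Synthetic check: rotate and translate a random point cloud,
# then recover the transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(src, dst)
```

Elastic registration then relaxes the rigidity constraint, allowing local deformation while (in the thesis) preserving selected features.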
335

Automated Feature Engineering for Deep Neural Networks with Genetic Programming

Heaton, Jeff T. 01 January 2017 (has links)
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. Engineered features are usually created from expressions that combine one or more of the original features. The choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that different model families benefit from different types of engineered features. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product-based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. The algorithm presented here faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy.
Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm’s engineered features.
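The core loop, proposing candidate expressions over the original features and keeping the one that scores best, can be sketched with a fixed candidate pool and a correlation-based fitness standing in for the dissertation's neural-network evaluation. Everything here (the candidate pool, the fitness function, the toy data) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical pool of candidate expressions over two features.
# A real GP system would evolve such expressions rather than enumerate them.
OPS = [
    ("x0 + x1", lambda X: X[:, 0] + X[:, 1]),
    ("x0 * x1", lambda X: X[:, 0] * X[:, 1]),
    ("x0 - x1", lambda X: X[:, 0] - X[:, 1]),
    ("x0 / (1 + |x1|)", lambda X: X[:, 0] / (1 + np.abs(X[:, 1]))),
]

def fitness(expr, X, y):
    """Score an engineered feature by |correlation| with the target --
    a cheap stand-in for retraining a neural network per candidate."""
    f = expr(X)
    if np.std(f) == 0:
        return 0.0
    return abs(np.corrcoef(f, y)[0, 1])

# Toy task where the target genuinely needs the product feature.
X = rng.normal(size=(200, 2))
y = X[:, 0] * X[:, 1]
best = max(OPS, key=lambda op: fitness(op[1], X, y))  # picks "x0 * x1"
```

The dissertation's contribution lies in making the neural-network-based evaluation of each candidate computationally affordable; this sketch only shows the propose-and-score structure.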
336

Enhanced Contour Description for People Detection in Images

Du, Xiaoyun January 2014 (has links)
People detection has been an attractive technology in computer vision. There are many useful applications in our daily life, for instance, intelligent surveillance and driver assistance systems. People detection is a challenging matter, as people adopt a wide range of poses, wear diverse clothes, and are visible against different kinds of backgrounds with significant changes in illumination. In this thesis, some advanced techniques and powerful tools are presented in order to design a robust people detection system. First, a baseline model is implemented by combining the Histogram of Oriented Gradients descriptor and linear Support Vector Machines. This baseline model obtains a good performance on the well-known INRIA dataset. Second, an advanced model is proposed: a two-layer cascade framework that achieves both accurate detection and lower computational complexity. For the first layer, the baseline model is used as a filter to generate several candidates. In this procedure, most positive samples survive and the majority of negative samples are rejected according to a preset threshold. The second layer uses a more discriminative model. We combine the Variational Local Binary Patterns descriptor and the Histogram of Oriented Gradients descriptor as a new discriminative feature. Furthermore, multi-scale feature descriptors are used to improve the discriminative power of the Variational Local Binary Patterns feature. We then perform feature selection using the Feature Generating Machine in order to generate a concise descriptor based on this concatenated feature. Moreover, a Histogram Intersection Kernel Support Vector Machine is employed as an efficient classification tool. The bootstrapping algorithm is used in the training procedure to exploit the information in the dataset. Finally, our approach achieves good performance on the INRIA dataset, with results superior to the baseline model.
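The histogram intersection kernel used by the second-layer classifier is simple to state: K(x, y) = Σ_i min(x_i, y_i) over histogram bins. A minimal sketch of the Gram-matrix computation (illustrative code, not the thesis's implementation):

```python
import numpy as np

def hist_intersection_kernel(A, B):
    """Gram matrix of the histogram intersection kernel
    K(x, y) = sum_i min(x_i, y_i).

    A: (n, d) and B: (m, d), rows are (typically L1-normalised)
    histograms; returns an (n, m) kernel matrix."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

h1 = np.array([[0.5, 0.3, 0.2]])
h2 = np.array([[0.2, 0.5, 0.3]])
K = hist_intersection_kernel(h1, h2)   # min-sum: 0.2 + 0.3 + 0.2 = 0.7
```

For L1-normalised histograms, K(x, x) = 1 and the kernel measures bin-wise overlap, which is why it suits histogram descriptors like HOG and LBP variants.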
337

Real-Time Localization of Planar Targets on Power-Constrained Devices

Akhoury, Sharat Saurabh January 2013 (has links)
In this thesis, we present a method for detecting planar targets in real time on power-constrained, or low-powered, hand-held devices such as mobile phones. We adopt the feature recognition (also referred to as feature matching) approach and employ fast-to-compute local feature descriptors to establish point correspondences. To obtain satisfactory localization accuracy, most local feature descriptors seek a transformation of the input intensity patch that is invariant to various geometric and photometric deformations. Generally, such transformations are computationally intensive and hence not ideal for real-time applications on limited hardware platforms. On the other hand, descriptors that are fast to compute are typically limited in their ability to provide invariance to a vast range of deformations. To address these shortcomings, we have developed a learning-based approach that can be applied to any local feature descriptor to increase the system's robustness to both affine and perspective deformations. The motivation behind applying a learning-based approach is to transfer as much of the computational burden as possible onto an offline training phase, allowing a reduction in cost during online matching. The approach comprises identifying keypoints that remain stable under artificially induced perspective transformations, extracting the corresponding feature vectors, and finally aggregating the feature vectors of coincident keypoints to obtain the final descriptors. We strictly focus on objects that are planar, which allows us to synthesize images of the object in order to capture the appearance of keypoint patches under several perspectives.
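Because the target is planar, synthesizing views amounts to warping the image (and its keypoints) through homographies. A minimal sketch of the point-mapping side of that synthesis, with an illustrative random-perspective generator (the actual deformation ranges used in the thesis are not given here):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography, including the
    homogeneous divide."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

def random_perspective(rng, strength=1e-3):
    """A small random perspective distortion of the identity -- the
    kind of synthetic view used to test which keypoints stay stable."""
    H = np.eye(3)
    H[2, :2] = rng.uniform(-strength, strength, size=2)
    return H

rng = np.random.default_rng(0)
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
H = random_perspective(rng)
warped = apply_homography(H, corners)
```

Keypoints whose detections coincide across many such warps are the "stable" ones whose descriptors the offline phase aggregates.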
338

A Research Platform for Artificial Neural Networks with Applications in Pediatric Epilepsy

Ayala, Melvin 10 July 2009 (has links)
This dissertation established a state-of-the-art programming tool for designing and training artificial neural networks (ANNs) and showed its applicability to brain research. The developed tool, called NeuralStudio, allows users without programming skills to conduct studies based on ANNs in a powerful and very user-friendly interface. A series of unique features has been implemented in NeuralStudio, such as ROC analysis, cross-validation, network averaging, topology optimization, and optimization of the activation functions' slopes. It also includes a Support Vector Machines module for comparison purposes. Once the tool was fully developed, it was applied to two studies in brain research. In the first study, the goal was to create and train an ANN to detect epileptic seizures from subdural EEG. This analysis involved extracting features from the spectral power in the gamma frequencies. In the second application, a unique method was devised to link EEG recordings to epileptic and non-epileptic subjects. The contribution of this method was a descriptor matrix that can be used to represent any EEG file, regardless of its duration and number of electrodes. The first study showed that the inter-electrode mean of the spectral power in the gamma frequencies, together with its duration above a specific threshold, performs better than the other frequency bands in seizure detection, exhibiting an accuracy of 95.90%, a sensitivity of 92.59%, and a specificity of 96.84%. The second study showed that Hjorth's activity parameter is sufficient to accurately relate EEG to epileptic and non-epileptic subjects. After testing, the accuracy, sensitivity and specificity of the classifier were all above 0.9667. Statistical tests measured the superiority of activity at over 99.99% certainty.
It was demonstrated that 1) the spectral power in the gamma frequencies is highly effective in locating seizures from EEG and 2) activity can be used to link EEG recordings to epileptic and non-epileptic subjects. These two studies required a high computational load and could be addressed thanks to NeuralStudio. From a medical perspective, both methods proved the merits of NeuralStudio in brain research applications. For its outstanding features, NeuralStudio was recently awarded a patent (US patent No. 7502763).
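The gamma-band feature at the heart of the first study is essentially a band-limited spectral power estimate. A minimal sketch using a plain periodogram, not whatever estimator the dissertation actually used, with synthetic data and illustrative band edges (gamma is taken here as roughly 30-100 Hz):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` in [f_lo, f_hi] Hz, from a
    plain periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 256
t = np.arange(fs * 2) / fs              # 2 s of synthetic "EEG"
sig = np.sin(2 * np.pi * 40 * t)        # a pure 40 Hz (gamma-band) tone
gamma = band_power(sig, fs, 30, 100)    # large: signal lives here
alpha = band_power(sig, fs, 8, 12)      # ~0: no alpha content
```

The study's seizure feature additionally averages this quantity across electrodes and thresholds its duration; this sketch covers only the band-power step.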
339

Embedded Feature Selection for Model-based Clustering

January 2020 (has links)
Model-based clustering is a sub-field of statistical modeling and machine learning. Mixture models use probabilities to describe the degree to which a data point belongs to a cluster, and these probabilities are updated iteratively during clustering. While mixture models have demonstrated superior performance in handling noisy data in many fields, challenges remain for high-dimensional datasets. Among a large number of features, some may not actually contribute to delineating the cluster profiles. The inclusion of these “noisy” features confuses the model's identification of the real structure of the clusters and costs more computational time. Recognizing this issue, in this dissertation I propose a new feature selection algorithm, first for continuous data and then extended to mixed data types; finally, I conduct uncertainty quantification for the feature selection results as the third topic. The first topic is an embedded feature selection algorithm termed the Expectation-Selection-Maximization (ESM) model, which can automatically select features while optimizing the parameters of a Gaussian mixture model. I introduce a relevancy index (RI) revealing the contribution of each feature to the clustering process to assist feature selection. I demonstrate the efficacy of the ESM by studying two synthetic datasets, four benchmark datasets, and an Alzheimer's Disease dataset. The second topic focuses on extending the ESM algorithm to handle mixed data types. The Gaussian mixture model is generalized to the Generalized Model of Mixture (GMoM), which can handle not only continuous features but also binary and nominal features. The last topic concerns uncertainty quantification (UQ) of the feature selection. A new algorithm termed ESOM is proposed, which takes variance information into consideration while conducting feature selection.
Also, a set of outliers is generated in the feature selection process to infer the uncertainty in the input data. Finally, the selected features and detected outlier instances are evaluated by visual comparison. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2020
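The abstract does not define the relevancy index, but its intuition, that informative features separate the clusters while noise features do not, can be captured by a simplified stand-in: the per-feature ratio of between-cluster to total variance. All names and data below are illustrative:

```python
import numpy as np

def relevancy_index(X, labels):
    """Per-feature between-cluster variance divided by total variance:
    a simplified stand-in for the RI. Features that separate the
    clusters score near 1; pure-noise features score near 0."""
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    for k in np.unique(labels):
        Xk = X[labels == k]
        between += len(Xk) * (Xk.mean(axis=0) - overall) ** 2
    total = ((X - overall) ** 2).sum(axis=0)
    return between / total

# Two clusters; one feature is informative, one is pure noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
informative = np.where(labels == 0, -3.0, 3.0) + rng.normal(size=200)
noise = rng.normal(size=200)
X = np.column_stack([informative, noise])
ri = relevancy_index(X, labels)   # ri[0] near 1, ri[1] near 0
```

The ESM algorithm computes its index inside the EM iterations while fitting the mixture; this sketch uses fixed labels only to show the scoring idea.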
340

A Moving-window penalization method and its applications

Bao, Minli 01 August 2017 (has links)
Genome-wide association studies (GWAS) have played an important role in identifying genetic variants underlying human complex traits. However, their success is hindered by weak effects at causal variants and noise at non-causal variants. Penalized regression can be applied to handle GWAS problems. GWAS data have a notable specificity: consecutive genetic markers are usually highly correlated due to linkage disequilibrium. This thesis introduces a moving-window penalized method for GWAS that smooths the effects of consecutive SNPs. Simulation studies indicate that this penalized moving-window method provides improved true positive findings. The practical utility of the proposed method is demonstrated by applying it to the Genetic Analysis Workshop 16 Rheumatoid Arthritis data. Next, the moving-window penalty is applied to the generalized linear model. We call this approach the smoothed lasso (SLasso). Coordinate descent algorithms are proposed in detail, for both quadratic and logistic loss, and asymptotic properties are discussed. Then, based on SLasso, we discuss a two-stage method called MW-Ridge. Simulation results show that while SLasso can provide more true positive findings than the lasso, it has the side effect of including more unrelated random noise. MW-Ridge can eliminate this side effect, resulting in high true positive rates and low false detection rates. Applicability to real data is illustrated using the GAW 16 Rheumatoid Arthritis data. The SLasso and MW-Ridge approaches are then generalized to multivariate response data, which can be transformed into univariate response data. The causal variants are not required to be the same for different response variables. We found that no matter how the causal variants are matched, whether fully matched or 60% matched, MW-Ridge always outperforms the lasso by detecting all true positives with lower false detection rates.
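The abstract does not give the exact penalty, but the flavor of the approach, shrinkage plus smoothing of consecutive SNP effects, can be sketched as ridge regression with an added first-difference penalty, which has a closed-form solution. This is a hypothetical simplification for illustration, not the thesis's SLasso or MW-Ridge estimator, and all names and data are assumptions:

```python
import numpy as np

def smoothed_ridge(X, y, lam_ridge=1.0, lam_smooth=1.0):
    """Solve min ||y - Xb||^2 + lam_ridge ||b||^2 + lam_smooth ||Db||^2,
    where D takes first differences of consecutive coefficients, so
    neighbouring (SNP-like) effects are encouraged to be similar."""
    p = X.shape[1]
    D = np.eye(p - 1, p, k=1) - np.eye(p - 1, p)   # first-difference matrix
    A = X.T @ X + lam_ridge * np.eye(p) + lam_smooth * D.T @ D
    return np.linalg.solve(A, X.T @ y)

# Synthetic check: a block of consecutive non-zero effects, as linkage
# disequilibrium would suggest.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
b = smoothed_ridge(X, y, lam_ridge=0.1, lam_smooth=1.0)
```

SLasso replaces the quadratic terms with L1-type penalties (requiring coordinate descent rather than a linear solve), and MW-Ridge adds a second selection stage; this sketch shows only the smoothing-penalty idea.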
