211

Protein Tertiary Model Assessment Using Granular Machine Learning Techniques

Chida, Anjum A 21 March 2012 (has links)
The automatic prediction of a protein's three-dimensional structure from its amino acid sequence has become one of the most important and most researched fields in bioinformatics. Because models are predictions rather than experimental structures determined with known accuracy, it is vital to estimate model quality. We attempt to solve this problem using machine learning techniques and information from both the sequence and the structure of the protein. The goal is to build a machine that learns structures from the PDB (Protein Data Bank) and, when given a new model, predicts whether it belongs to the same class as the PDB structures (correct or incorrect protein models). Different subsets of the PDB are considered for evaluating the prediction potential of the machine learning methods. We present two such machines, one using support vector machines (SVM) and another using fuzzy decision trees (FDT). With a preliminary encoding style, the SVM reached around 70% accuracy in protein model quality assessment, and an improved fuzzy decision tree (IFDT) reached above 80%. To reduce computational overhead, a multiprocessor environment and a basic feature selection method are used in the SVM learning algorithm. Next, an enhanced scheme is introduced using a new encoding style, in which information such as the amino acid substitution matrix, polarity, secondary structure, and relative distances between alpha-carbon atoms is collected by spatially traversing the 3D structure to form training vectors. This guarantees that properties of alpha-carbon atoms that are close together in 3D space, and thus interacting, are used in vector formation. With the fuzzy decision tree, we obtained a training accuracy of around 90%, a significant improvement over the previous encoding technique in both prediction accuracy and execution time.
This outcome motivates the continued exploration of effective machine learning algorithms for accurate protein model quality assessment. Finally, these machines are tested on CASP8 and CASP9 templates and compared with other CASP competitors, with promising results. We further discuss the importance of model quality assessment and other protein information that could be considered for the same purpose.
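The spatial-traversal idea above, pairing alpha-carbon atoms that are close in 3D space, can be sketched as follows. The 8 Å cutoff and the (i, j, distance) feature layout are illustrative assumptions, not the thesis's exact encoding scheme.

```python
import math

def encode_contacts(ca_coords, cutoff=8.0):
    """Build a simple training vector from alpha-carbon coordinates:
    for each residue pair closer than `cutoff` angstroms in 3D space,
    record (i, j, distance), so that spatially interacting residues
    are represented together in the feature vector. The cutoff and
    layout here are illustrative, not the thesis's exact scheme."""
    features = []
    n = len(ca_coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(ca_coords[i], ca_coords[j])
            if d < cutoff:
                features.append((i, j, round(d, 3)))
    return features
```

A real encoding would attach further per-pair properties (substitution scores, polarity, secondary structure) to each contact entry.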
212

Noise Reduction In Time-frequency Domain

Kalyoncu, Ozden 01 September 2007 (has links) (PDF)
In this thesis work, time-frequency filtering of nonstationary signals in noise using the Wigner-Ville distribution is investigated. The continuous-time, discrete-time, and discrete Wigner-Ville distribution definitions, their relations, and their properties are given. The Time-Frequency Peak Filtering method is presented; the effects of different parameters on its performance are investigated, and the results are reported. The Time-Varying Wiener Filter is presented. Simulations show that the performance of this filter is good at SNR levels down to -5 dB, and it is proposed and shown that its performance improves by using Support Vector Machines. The presented time-frequency filtering techniques are applied to test signals and to a real-world signal. The results obtained by the two methods, and also by classical zero-phase low-pass filtering, are compared. It is observed that the Time-Varying Wiener Filter performs better at low sampling rates, and the Time-Frequency Peak Filter at high sampling rates.
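A minimal discrete Wigner-Ville distribution can be computed by taking, at each time sample, the Fourier transform of the instantaneous autocorrelation. This sketch follows the standard textbook definition, not the thesis's specific implementation; note the distribution's doubled frequency axis.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a (preferably analytic)
    signal x. Column n is the spectrum of the instantaneous
    autocorrelation r[tau] = x[n+tau] * conj(x[n-tau]); because r is
    conjugate-symmetric, its DFT is real. A tone at frequency bin f0
    appears at bin 2*f0 on this doubled frequency axis."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)          # lags that stay in range
        r = np.zeros(N, dtype=complex)
        for tau in range(-taumax, taumax + 1):
            r[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[:, n] = np.fft.fft(r).real
    return W
```

For a complex exponential at bin 8 of a 64-sample signal, the distribution peaks at bin 16, mid-signal.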
213

Approaches For Automatic Urban Building Extraction And Updating From High Resolution Satellite Imagery

Koc San, Dilek 01 March 2009 (has links) (PDF)
Approaches were developed for building extraction and updating from high resolution satellite imagery. The developed approaches include two main stages: (i) detecting the building patches and (ii) delineating the building boundaries. The building patches are detected from high resolution satellite imagery using the Support Vector Machines (SVM) classification, which is performed for both the building extraction and updating approaches. In the building extraction part of the study, the previously detected building patches are delineated using the Hough transform and boundary tracing based techniques. In the Hough transform based technique, the boundary delineation is carried out using the processing operations of edge detection, Hough transformation, and perceptual grouping. In the boundary tracing based technique, the detected edges are vectorized using the boundary tracing algorithm. The results are then refined through line simplification and vector filters. In the building updating part of the study, the destroyed buildings are determined through analyzing the existing building boundaries and the previously detected building patches. The new buildings are delineated using the developed model based approach, in which the building models are selected from an existing building database by utilizing the shape parameters. The developed approaches were tested in the Batikent district of Ankara, Turkey, using the IKONOS panchromatic and pan-sharpened stereo images (2002) and existing vector database (1999). The results indicate that the proposed approaches are quite satisfactory with the accuracies computed in the range from 68.60% to 98.26% for building extraction, and from 82.44% to 88.95% for building updating.
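The Hough-transform step used for boundary delineation above can be sketched as a voting procedure in (rho, theta) space. This is a generic line-Hough accumulator over edge points, not the thesis's full pipeline (which adds edge detection and perceptual grouping); the bin counts are illustrative parameters.

```python
import numpy as np

def hough_lines(edge_points, img_diag, n_theta=180, n_rho=200):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta).
    Each edge point votes once per theta; collinear points pile up
    in a single (rho, theta) cell, whose peak identifies a building
    edge. `img_diag` bounds |rho|; bin counts are illustrative."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho_vals = x * cos_t + y * sin_t
        idx = np.round((rho_vals + img_diag) / (2 * img_diag)
                       * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rhos
```

Points along a horizontal roof edge (y = 10) produce a peak at theta = 90 degrees with rho near 10.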
214

Estimation and prediction of travel time from loop detector data for intelligent transportation systems applications

Vanajakshi, Lelitha Devi 01 November 2005 (has links)
With the advent of Advanced Traveler Information Systems (ATIS), short-term travel time prediction is becoming increasingly important. Travel time can be obtained directly from instrumented test vehicles, license plate matching, probe vehicles, etc., or from indirect methods such as loop detectors. Because of their widespread deployment, travel time estimation from loop detector data is one of the most widely used methods. However, the major criticism of loop detector data is the high probability of error due to the prevalence of equipment malfunctions. This dissertation presents methodologies for estimating and predicting travel time from loop detector data after correcting for errors. The methodology is a multi-stage process comprising correction of the data, estimation of travel time, and prediction of travel time, and each stage involves the judicious use of suitable techniques. The test sites are freeways in San Antonio, Texas, which are equipped with dual inductance loop detectors and AVI. The techniques selected for each stage are:
• A constrained non-linear optimization approach using the Generalized Reduced Gradient (GRG) method for data reduction and quality control, which includes a check of the accuracy of data from a series of detectors for conservation of vehicles, in addition to the commonly adopted checks.
• A theoretical model based on traffic flow theory for travel time estimation under both off-peak and peak traffic conditions, using flow, occupancy, and speed values obtained from the detectors.
• Application of a recently developed technique called Support Vector Machines (SVM) for travel time prediction; an Artificial Neural Network (ANN) method is also developed for comparison.
Thus, a complete system for the estimation and prediction of travel time from loop detector data is detailed in this dissertation. Simulated data from the CORSIM simulation software is used for validation of the results.
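The conservation-of-vehicles check mentioned in the quality-control stage can be sketched as a simple cumulative-count comparison between consecutive detectors. This is a minimal stand-in for the dissertation's GRG-based formulation; the 5% tolerance is a hypothetical parameter.

```python
def conservation_check(upstream_counts, downstream_counts, tolerance=0.05):
    """Flag intervals where vehicle conservation is violated: over a
    common period, the cumulative count at an upstream detector should
    roughly match the count at the next downstream detector (assuming
    no ramps in between). The 5% tolerance is a hypothetical value,
    not the dissertation's calibrated threshold."""
    up_total = sum(upstream_counts)
    down_total = sum(downstream_counts)
    if up_total == 0:
        return down_total == 0
    return abs(up_total - down_total) / up_total <= tolerance
```

Intervals failing the check would be corrected or discarded before travel time estimation.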
215

Support vector classification analysis of resting state functional connectivity fMRI

Craddock, Richard Cameron 17 November 2009 (has links)
Since its discovery in 1995, resting state functional connectivity derived from functional MRI data has become a popular neuroimaging method for studying psychiatric disorders. Current methods for analyzing resting state functional connectivity in disease involve thousands of univariate tests and the specification of regions of interest to employ in the analysis. There are several drawbacks to these methods. First, the mass univariate tests employed are insensitive to the information present in distributed networks of functional connectivity. Second, the null hypothesis testing employed to select functional connectivity differences between groups does not evaluate the predictive power of the identified functional connectivities. Third, the specification of regions of interest is confounded by experimenter bias in terms of which regions should be modeled, and by experimental error in the size and location of these regions. The objective of this dissertation is to improve the methods for functional connectivity analysis using multivariate predictive modeling, feature selection, and whole brain parcellation. A method of applying support vector classification (SVC) to resting state functional connectivity data was developed in the context of a neuroimaging study of depression. The interpretability of the obtained classifier was optimized using feature selection techniques that incorporate reliability information. The problem of selecting regions of interest for whole brain functional connectivity analysis was addressed by clustering whole brain functional connectivity data to parcellate the brain into contiguous, functionally homogeneous regions. This newly developed framework was applied to derive a classifier capable of correctly separating the functional connectivity patterns of patients with depression from those of healthy controls 90% of the time.
The features most relevant to the obtained classifier match those identified in previous studies, but also include several regions not previously implicated in the functional networks underlying depression.
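Turning region-wise time series into a feature vector for the support vector classifier can be sketched as below: the symmetric correlation matrix is reduced to its upper triangle. This is the standard connectivity-feature construction, shown here as a generic sketch rather than the dissertation's exact preprocessing.

```python
import numpy as np

def connectivity_features(timeseries):
    """Turn resting-state time series from R brain regions (rows) into
    one feature vector for a support vector classifier: compute the
    R x R correlation matrix and keep only its upper triangle, since
    the matrix is symmetric and its diagonal is uninformative."""
    corr = np.corrcoef(timeseries)           # shape (R, R)
    iu = np.triu_indices_from(corr, k=1)     # indices above the diagonal
    return corr[iu]                          # R*(R-1)/2 features
```

With R regions this yields R(R-1)/2 connectivity features per subject, which is what makes feature selection and parcellation important downstream.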
216

Automated Prediction of CMEs Using Machine Learning of CME-Flare Associations

Qahwaji, Rami S. R., Colak, Tufan, Al-Omari, M., Ipson, Stanley S. 02 June 2008 (has links)
Machine-learning algorithms are applied to explore the relation between significant flares and their associated CMEs. The NGDC flares catalogue and the SOHO/LASCO CME catalogue are processed to associate X- and M-class flares with CMEs based on timing information. Automated systems are created to process and associate years of flare and CME data, which are later arranged in numerical training vectors and fed to machine-learning algorithms to extract the embedded knowledge and provide learning rules that can be used for the automated prediction of CMEs. Properties representing the flare intensity, duration, duration of decline, and duration of growth are extracted from all the associated (A) and not-associated (NA) flares and converted to a numerical format suitable for machine-learning use. The machine-learning algorithms Cascade Correlation Neural Networks (CCNN) and Support Vector Machines (SVM) are used and compared in our work. The machine-learning systems predict, from the input of a flare's properties, whether the flare is likely to initiate a CME. Intensive experiments using jack-knife techniques are carried out, and the results are used to investigate the relationships between flare properties and CMEs. The predictive performance of SVM and CCNN is analysed and recommendations for enhancing the performance are provided. / EPSRC
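The jack-knife evaluation mentioned above is leave-one-out testing: each flare is held out in turn and predicted from the rest. The sketch below uses a 1-nearest-neighbour rule purely as a stand-in classifier, since reproducing the paper's SVM/CCNN learners is out of scope here.

```python
def jackknife_accuracy(vectors, labels):
    """Leave-one-out (jack-knife) evaluation: each flare's feature
    vector is held out in turn, a 1-nearest-neighbour rule (a simple
    stand-in for the paper's SVM/CCNN learners) predicts its CME /
    no-CME label from the remaining samples, and the fraction of
    correct predictions is returned."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = 0
    for i, v in enumerate(vectors):
        nearest = min((j for j in range(len(vectors)) if j != i),
                      key=lambda j: dist(v, vectors[j]))
        correct += labels[nearest] == labels[i]
    return correct / len(vectors)
```

With N flares this trains/evaluates N times, which is why it suits the modest catalogue sizes involved.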
217

Soft margin estimation for automatic speech recognition

Li, Jinyu 27 August 2008 (has links)
In this study, a new discriminative learning framework, called soft margin estimation (SME), is proposed for estimating the parameters of continuous-density hidden Markov models (HMMs). The proposed method makes direct use of the successful ideas of the margin in support vector machines to improve generalization capability, and of decision feedback learning in discriminative training to enhance model separation in classifier design. SME directly maximizes the separation of competing models so that test samples reach a correct decision if their deviation from the training samples is within a safe margin. Frame and utterance selection are integrated into a unified framework to select the training utterances and frames critical for discriminating competing models. SME offers a flexible and rigorous framework to facilitate the incorporation of new margin-based optimization criteria into HMM training. The choice of various loss functions is illustrated, and different kinds of separation measures are defined under a unified SME framework. SME is also shown to be able to jointly optimize feature extraction and HMMs. Both the generalized probabilistic descent algorithm and the Extended Baum-Welch algorithm are applied to solve SME. SME has demonstrated a great advantage over other discriminative training methods in several speech recognition tasks. Tested on the TIDIGITS digit recognition task, the proposed SME approach achieves a string accuracy of 99.61%, the best result reported in the literature. On the 5k-word Wall Street Journal task, SME reduced the word error rate (WER) from 5.06% with MLE models to 3.81%, a relative WER reduction of 25%. This is the first attempt to show the effectiveness of margin-based acoustic modeling for large vocabulary continuous speech recognition in an HMM framework.
The generalization of SME was also well demonstrated on the Aurora 2 robust speech recognition task, with around 30% relative WER reduction from the clean-trained baseline.
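The margin idea above, penalizing samples whose model separation falls inside a safe margin, can be illustrated with a hinge-style loss over per-sample separation scores. This is only the conceptual core of a margin-based objective; the thesis's actual separation measures and optimization are far richer.

```python
def soft_margin_loss(separations, margin=1.0):
    """Hinge-style objective over per-sample separation scores d_i
    (positive = the correct model wins by some amount). Samples whose
    separation falls below `margin` contribute (margin - d_i) to the
    loss, so minimizing it pushes every sample at least `margin` away
    from the decision boundary. The separation measure and margin
    value here are illustrative, not the thesis's definitions."""
    return sum(max(0.0, margin - d) for d in separations)
```

Well-separated samples contribute nothing, so training effort concentrates on the frames and utterances near the boundary, matching the selection idea in the abstract.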
218

Sensing and Knowledge Mining for Structural Health Management

January 2011 (has links)
Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas such as novel sensing techniques, robust algorithms for damage interrogation, high fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is through the use of Lamb wave propagation which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave based approaches as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions which may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered so PZTs are placed in regions where they do not fail before the host structure. In order to process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures.
To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure. A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site. / Dissertation/Thesis / Ph.D. Aerospace Engineering 2011
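A common way to detect damage from the pitch-catch signals described above is a correlation-based damage index comparing the current sensor signal against a healthy baseline. This is a generic index offered as a sketch; the dissertation's signal-processing chain is more elaborate.

```python
import numpy as np

def damage_index(baseline, current):
    """Correlation-based damage index for pitch-catch Lamb wave
    signals: near 0 when the current signal matches the healthy
    baseline, growing as damage along the actuator-sensor path
    distorts the propagating wave. One common index among many;
    not the dissertation's exact formulation."""
    b = np.asarray(baseline, dtype=float)
    c = np.asarray(current, dtype=float)
    rho = np.corrcoef(b, c)[0, 1]   # Pearson correlation of the two signals
    return 1.0 - rho
```

Thresholding the index per actuator-sensor pair localizes which paths cross damaged regions.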
219

Uma abordagem multinível usando algoritmos genéticos em um comitê de LS-SVM

Padilha, Carlos Alberto de Araújo January 2018 (has links)
Ensemble systems have proven to be an efficient way to increase the accuracy and stability of learning algorithms in recent decades, although their construction leaves one question to be elucidated: diversity. Disagreement among the models that compose the ensemble can be generated when they are built under different circumstances, such as the training dataset, parameter settings, and the selection of learning algorithms.
The ensemble may be viewed as a structure with three levels: the input space, the base components, and the block that combines the components' responses. This work proposes a multi-level approach using genetic algorithms to build an ensemble of Least Squares Support Vector Machines (LS-SVM): performing feature selection in the input space; handling parameterization and the choice of which models will compose the ensemble at the component level; and finding the weight vector that best represents the importance of each classifier in the ensemble's final response. To evaluate the performance of the proposed approach, benchmarks from the UCI repository were used for comparison with other classification algorithms. The results were also compared with deep learning methods on the MNIST and CIFAR datasets and proved very satisfactory.
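The combination level described above, where a weight vector scales each committee member's vote, can be sketched as a weighted sign vote. Finding good weights is the job of the genetic algorithm; the sketch below shows only the deterministic combination rule, with a hypothetical tie-breaking convention.

```python
def weighted_vote(predictions, weights):
    """Combine binary (+1/-1) predictions from LS-SVM committee
    members using the weight vector found at the combination level:
    the ensemble answer is the sign of the weighted sum, so
    higher-weighted classifiers dominate the final response. Ties
    resolve to +1 by convention (an illustrative choice)."""
    s = sum(w * p for w, p in zip(weights, predictions))
    return 1 if s >= 0 else -1
```

A genetic algorithm would then evaluate candidate weight vectors by the validation accuracy of this vote and evolve them toward the best-performing combination.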
