  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Detection and Classification of Sequence Variants for Diagnostic Evaluation of Genetic Disorders

Kothiyal, Prachi 05 August 2010 (has links)
No description available.
22

Supervised classification of bradykinesia in Parkinson’s disease from smartphone videos

Williams, S., Relton, S.D., Fang, H., Alty, J., Qahwaji, Rami S.R., Graham, C.D., Wong, D.C. 21 March 2021 (has links)
No / Background: Slowness of movement, known as bradykinesia, is the core clinical sign of Parkinson's and fundamental to its diagnosis. Clinicians commonly assess bradykinesia by making a visual judgement of the patient tapping finger and thumb together repetitively. However, inter-rater agreement of expert assessments has been shown to be only moderate, at best. Aim: We propose a low-cost, contactless system using smartphone videos to automatically determine the presence of bradykinesia. Methods: We collected 70 videos of finger-tap assessments in a clinical setting (40 Parkinson's hands, 30 control hands). Two clinical experts in Parkinson's, blinded to the diagnosis, evaluated the videos to give a grade of bradykinesia severity between 0 and 4 using the Unified Parkinson's Disease Rating Scale (UPDRS). We developed a computer vision approach that identifies regions related to hand motion and extracts clinically relevant features. Dimensionality reduction was undertaken using principal component analysis before input to classification models (Naïve Bayes, Logistic Regression, Support Vector Machine) to predict no/slight bradykinesia (UPDRS = 0–1) or mild/moderate/severe bradykinesia (UPDRS = 2–4), and presence or absence of Parkinson's diagnosis. Results: A Support Vector Machine with radial basis function kernels predicted presence of mild/moderate/severe bradykinesia with an estimated test accuracy of 0.8. A Naïve Bayes model predicted the presence of Parkinson's disease with estimated test accuracy 0.67. Conclusion: The method described here presents an approach for predicting bradykinesia from videos of finger-tapping tests. The method is robust to lighting conditions and camera positioning. On a set of pilot data, accuracy of bradykinesia prediction is comparable to that recorded by blinded human experts.
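The pipeline the abstract describes (feature extraction, PCA, then a classifier) can be sketched as follows; this is a minimal illustration assuming the hand-motion features have already been extracted from each video into a fixed-length vector, with placeholder data rather than the study's actual features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Illustrative data: 70 finger-tap videos, each summarised by a feature vector
# (e.g. tap amplitude, frequency, decrement); labels 1 = UPDRS 2-4, 0 = UPDRS 0-1.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 20))          # placeholder for extracted video features
y = rng.integers(0, 2, size=70)        # placeholder for clinician ratings

# PCA for dimensionality reduction, then an RBF-kernel SVM, as in the abstract.
svm_pipeline = make_pipeline(StandardScaler(), PCA(n_components=5),
                             SVC(kernel="rbf", C=1.0, gamma="scale"))
nb_pipeline = make_pipeline(StandardScaler(), PCA(n_components=5), GaussianNB())

for name, model in [("SVM (RBF)", svm_pipeline), ("Naive Bayes", nb_pipeline)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: estimated test accuracy {acc.mean():.2f}")
```

Cross-validated accuracy is one way to obtain the "estimated test accuracy" the abstract reports; the original study's exact validation protocol is not specified here.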
23

Noninvasive assessment and classification of human skin burns using images of Caucasian and African patients

Abubakar, Aliyu, Ugail, Hassan, Bukar, Ali M. 20 March 2022 (has links)
Yes / Burns are devastating injuries that subject thousands of people to loss of life and physical disfigurement each year. Both high-income and developing countries face major evaluation challenges, including but not limited to an inadequate workforce, poor diagnostic facilities, inefficient diagnosis and high operational cost. As such, there is a need to develop an automatic machine learning algorithm to noninvasively identify skin burns. This would operate with little or no human intervention, thereby acting as an affordable substitute for human expertise. We leverage the weights of pretrained deep neural networks for image description and subsequently feed the extracted image features into a support vector machine for classification. To the best of our knowledge, this is the first study that investigates black African skin. Interestingly, the proposed algorithm achieves state-of-the-art classification accuracy on both the Caucasian and African datasets.
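The described approach, a pretrained deep network used as a fixed feature extractor feeding a support vector machine, can be sketched as below. The abstract does not name the network or the label scheme, so the ResNet-50 backbone, the dummy images and the burn/healthy labels are illustrative assumptions:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Pretrained network used only as a feature extractor (ResNet-50 is an assumed choice).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the classification head, keep 2048-d features
backbone.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract_features(images):
    feats = []
    with torch.no_grad():
        for img in images:
            feats.append(backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy())
    return np.stack(feats)

# Placeholder images standing in for burn / healthy-skin photographs.
dummy = [Image.fromarray(np.uint8(np.random.rand(300, 300, 3) * 255)) for _ in range(4)]
labels = [1, 1, 0, 0]                  # assumed convention: 1 = burn, 0 = healthy skin
X_train = extract_features(dummy)
clf = SVC(kernel="linear").fit(X_train, labels)
```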
24

The Development and Validation of a Neural Model of Affective States

McCurry, Katherine Lorraine 10 January 2016 (has links)
Emotion dysregulation plays a central role in psychopathology (B. Bradley et al., 2011) and has been linked to aberrant activation of neural circuitry involved in emotion regulation (Beauregard, Paquette, & Lévesque, 2006; Etkin & Schatzberg, 2011). In recent years, technological advances in neuroimaging methods coupled with developments in machine learning have allowed for the non-invasive measurement and prediction of brain states in real-time, which can be used to provide feedback to facilitate regulation of brain states (LaConte, 2011). Real-time functional magnetic resonance imaging (rt-fMRI)-guided neurofeedback has promise as a novel therapeutic method in which individuals are provided with tailored feedback to improve regulation of emotional responses (Stoeckel et al., 2014). However, effective use of this technology for such purposes likely entails the development of (a) a normative model of emotion processing to provide feedback for individuals with emotion processing difficulties; and (b) best practices concerning how these types of group models are designed and translated for use in an rt-fMRI environment (Ruiz, Buyukturkoglu, Rana, Birbaumer, & Sitaram, 2014). To this end, the present study utilized fMRI data from a standard emotion elicitation paradigm to examine the impact of several design decisions made during the development of a whole-brain model of affective processing. Using support vector machine (SVM) learning, we developed a group model that reliably classified brain states associated with passive viewing of positive, negative, and neutral images. After validating the group whole-brain model, we adapted this model for use in an rt-fMRI experiment, and using a second imaging dataset along with our group model, we simulated rt-fMRI predictions and tested options for providing feedback. / Master of Science
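As a loose illustration of the final step, adapting a group SVM model to simulate real-time prediction, the sketch below trains a multi-class linear SVM on group data and then classifies a held-out run volume by volume; the data shapes and class labels are placeholders, not the study's dimensions or preprocessing:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Placeholder group data: voxels flattened per volume; labels 0=neutral, 1=positive, 2=negative.
rng = np.random.default_rng(1)
X_group = rng.normal(size=(600, 5000))      # training volumes pooled across the group
y_group = rng.integers(0, 3, size=600)

group_model = make_pipeline(StandardScaler(), LinearSVC()).fit(X_group, y_group)

# Simulated real-time use: classify each new volume of a held-out run as it "arrives".
held_out_run = rng.normal(size=(150, 5000))
predicted_states = [group_model.predict(volume.reshape(1, -1))[0]
                    for volume in held_out_run]
# In an rt-fMRI setting each prediction would drive the feedback display in turn.
print(predicted_states[:10])
```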
25

Development of a Support-Vector-Machine-based Supervised Learning Algorithm for Land Cover Classification Using Polarimetric SAR Imagery

Black, James Noel 16 October 2018 (has links)
Land cover classification using Synthetic Aperture Radar (SAR) data has been a topic of great interest in recent literature. Food commodities output prediction through crop identification, environmental monitoring, and forest regrowth tracking are some of the many problems that can be aided by land cover classification methods. The need for fast and automated classification methods is apparent in a variety of applications involving vast amounts of SAR data. One fundamental step in any supervised learning classification algorithm is the selection and/or extraction of features present in the dataset to be used for class discrimination. A popular method that has been proposed for feature extraction from polarimetric data is to decompose the data into the underlying scattering mechanisms. In this research, the Freeman and Durden scattering model is applied to ALOS PALSAR fully polarimetric data for feature extraction. Efficient methods for solving the complex system of equations present in the scattering model are developed and compared. Using the features from the Freeman and Durden work, the classification capability of the model is assessed on Amazon rainforest land cover types using a supervised Support Vector Machine (SVM) classification algorithm. The quantity of land cover types that can be discriminated using the model is also determined. Additionally, the performance of the median as a robust estimator for multi-pixel windowing in noisy environments is characterized. / Master of Science / Land type classification using Radar data has been a topic of great interest in recent literature. Food commodities output prediction through crop identification, environmental monitoring, and forest regrowth tracking are some of the many problems that can be aided by land cover classification methods. The need for fast and automated classification methods is apparent in a variety of applications involving vast amounts of Radar data. One fundamental step in any classification algorithm is the selection and/or extraction of discriminating features present in the dataset to be used for class discrimination. A popular method that has been proposed for feature extraction from polarized Radar data is to decompose the data into the underlying scatter components. In this research, a scattering model is applied to real world data for feature extraction. Efficient methods for solving the complex system of equations present in the scattering model are developed and compared. Using the features from the scattering model, the classification capability of the model is assessed on Amazon rainforest land types using a Support Vector Machine (SVM) classification algorithm. The quantity of land cover types that can be discriminated using the model is also determined and compared using different estimators.
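The Freeman and Durden three-component decomposition mentioned above can be computed per pixel from the averaged polarimetric covariance terms. The sketch below follows the standard published formulation (surface, double-bounce and volume powers) and is not necessarily one of the solver variants compared in the thesis:

```python
import numpy as np

def freeman_durden(Shh2, Svv2, Shv2, Shhvv):
    """Standard Freeman-Durden decomposition for one multi-looked pixel.

    Shh2, Svv2, Shv2 : <|S_HH|^2>, <|S_VV|^2>, <|S_HV|^2> (real)
    Shhvv            : <S_HH * conj(S_VV)> (complex)
    Returns (Ps, Pd, Pv): surface, double-bounce and volume powers.
    """
    fv = 3.0 * Shv2                       # volume term from the random-dipole model
    Pv = 8.0 * fv / 3.0
    A = Shh2 - fv                         # remove the volume contribution
    B = Svv2 - fv
    C = Shhvv - fv / 3.0
    if A <= 0 or B <= 0:                  # volume dominates; nothing left for the other terms
        return 0.0, 0.0, Shh2 + Svv2 + 2.0 * Shv2
    if C.real >= 0:                       # surface scattering dominant: fix alpha = -1
        fd = (A * B - abs(C) ** 2) / (A + B + 2.0 * C.real)
        fs = B - fd
        beta = (C + fd) / fs
        Ps, Pd = fs * (1.0 + abs(beta) ** 2), 2.0 * fd
    else:                                 # double-bounce dominant: fix beta = 1
        fs = (A * B - abs(C) ** 2) / (A + B - 2.0 * C.real)
        fd = B - fs
        alpha = (C - fs) / fd
        Ps, Pd = 2.0 * fs, fd * (1.0 + abs(alpha) ** 2)
    return Ps, Pd, Pv

# Example: the three powers become per-pixel features for the SVM classifier.
print(freeman_durden(0.30, 0.25, 0.02, 0.12 + 0.01j))
```

For these example inputs the three powers sum to the total span (|HH|² + 2|HV|² + |VV|²), which is a useful sanity check on any implementation.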
26

Uma metodologia de projetos para circuitos com reconfiguração dinâmica de hardware aplicada a support vector machines. / A design methodology for circuits with dynamic reconfiguration of hardware applied to support vector machines.

Gonzalez, José Artur Quilici 07 November 2006 (has links)
Systems based on general-purpose processors are characterized by flexibility to design changes, but with computational performance below that of optimized dedicated circuits. Implementing algorithms on reconfigurable devices, known as Field Programmable Gate Arrays (FPGAs), offers a trade-off between the processor's flexibility and the dedicated circuit's performance: FPGAs allow their hardware resources to be configured by software, with finer granularity than a general-purpose processor and greater flexibility than dedicated circuits. Current FPGAs have reconfiguration times small enough to make dynamic reconfiguration feasible, i.e., even while the device is executing an algorithm, the way its resources are arranged can be modified, opening the possibility of partitioning an algorithm temporally. New FPGA families are already manufactured with the option of partial dynamic reconfiguration, i.e., selected areas of an FPGA can be reconfigured while the remaining area continues in operation. However, for this technology to become widely adopted, a dedicated design methodology is needed that offers effective solutions to the new challenges of digital design. In particular, one of the main difficulties of this approach is how to partition the algorithm so as to minimize the time needed to complete its task. This manuscript offers a design methodology for dynamically reconfigurable devices, with emphasis on the temporal partitioning of circuits, having as target application a family of algorithms used mainly in bioinformatics and represented by the binary classifier known as the Support Vector Machine. Some partitioning techniques for dynamically reconfigurable FPGAs, specifically applicable to FSM partitioning, were developed to guarantee that a control-flow-dominated design can be mapped onto a single FPGA without modifying its functionality.
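The abstract does not detail the FSM-partitioning techniques themselves. Purely as an illustration of the temporal-partitioning problem it addresses, the sketch below scores candidate two-way splits of a hypothetical FSM's state-transition graph by the weighted number of cross-partition transitions, each of which would trigger a reconfiguration:

```python
from itertools import combinations

# Hypothetical FSM: transitions as (source_state, target_state, weight),
# where the weight approximates how often the transition is taken.
transitions = [("S0", "S1", 10), ("S1", "S2", 8), ("S2", "S0", 8),
               ("S2", "S3", 1), ("S3", "S4", 5), ("S4", "S3", 5), ("S4", "S0", 1)]
states = {s for t in transitions for s in t[:2]}

def reconfig_cost(partition_a):
    """Weighted count of transitions that cross the partition boundary."""
    return sum(w for u, v, w in transitions
               if (u in partition_a) != (v in partition_a))

# Exhaustive search over near-balanced two-way splits (feasible only for toy FSMs).
best = min((frozenset(c) for c in combinations(sorted(states), len(states) // 2)),
           key=reconfig_cost)
print(sorted(best), "cross-partition cost:", reconfig_cost(best))
```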
28

Investigation of integrated water level sensor solution for submersible pumps : A study of how sensors can be combined to withstand build-up materials and improve reliability in harsh environment / Undersökning av integrerad vattennivåsensorlösning för dränkbar pump

Abelin, Sarah January 2017 (has links)
Monitoring the water level in harsh environments in order to control the start and stop function of drainage pumps has been a major issue. Several environmental factors are present which affect and disturb sensor measurements. Current solutions with mechanical float switches, mounted outside the pumps, wear out, get entangled and account for more than half of all emergency call-outs to pumping stations. Since pumps are frequently moved around, a new sensor solution is needed that can be integrated within the pump housing and is able to continuously monitor the water level, optimizing the operation of the pump and decreasing wear, cost and energy consumption. This thesis presents an investigation of how different sensor techniques can be combined to improve the reliability of water level monitoring and to handle the start and stop function of drainage pumps in harsh environments. The main focus has been to identify suitable water level sensing techniques and to investigate how sensors are affected by materials building up on the pump surface and covering the sensor probes. A support vector machine algorithm is implemented to fuse sensor data in order to increase the reliability of the sensor solution in contaminated conditions. Results show that the combination of a pressure sensor and a capacitive sensor is the most suitable for withstanding build-up materials. Under operating conditions where the sensors were covered with soft or viscous build-ups, they were still able to monitor the water level through the build-up material. No solution was found that could satisfactorily monitor the water level through solidified build-up materials.
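As a rough sketch of the sensor-fusion step, the snippet below trains an SVM on paired pressure and capacitive readings to decide whether the water is above the start level; the feature layout, values and labels are placeholders rather than the thesis's actual signal processing:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training data: each row is [pressure_reading, capacitance_reading],
# collected both in clean and in contaminated (build-up) conditions.
X = np.array([[1.2, 40.0], [1.4, 55.0], [0.3, 12.0], [0.4, 30.0],
              [1.3, 20.0], [0.2, 45.0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = water above start level, 0 = below stop level

fusion_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

# At run time, the fused decision drives the pump's start/stop logic even when one
# sensor channel is disturbed by build-up material.
print(fusion_clf.predict([[1.1, 18.0]]))
```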
29

Uma comparação da aplicação de métodos computacionais de classificação de dados aplicados ao consumo de cinema no Brasil / A comparison of the application of computational data classification methods to the consumption of films at movie theaters in Brazil

Nieuwenhoff, Nathalia 13 April 2017 (has links)
Machine learning techniques for data classification or categorization are increasingly used to extract information or patterns from large databases in a variety of application areas. At the same time, applying these computational methods to identify patterns and classify data related to the consumption of information goods is a complex task, since such consumption decisions are related to individual preferences and depend on a combination of individual characteristics and cultural, economic and social variables, segregated and grouped; it is also a topic little explored in the Brazilian market. In this context, this study carried out an experimental application of the Knowledge Discovery in Databases (KDD) process, including the data selection and data mining steps, to a binary classification problem: Brazilian individuals who do or do not consume an information good, namely films at movie theaters, using microdata from the Brazilian Household Budget Survey (POF) 2008-2009 conducted by the Brazilian Institute of Geography and Statistics (IBGE). The experimental study resulted in a comparative analysis of two supervised machine learning techniques for data classification, Naïve Bayes (NB) and the Support Vector Machine (SVM). A systematic review carried out to identify studies on the application of machine learning techniques to the classification and identification of consumption patterns indicates that the use of these techniques in this context is not a mature, well-developed research topic, since it was not addressed in any of the papers analyzed. The results of the comparative analysis between the algorithms suggest that the choice of machine learning algorithm for data classification is directly related to factors such as: (i) the importance of the classes for the problem under study; (ii) the balance between classes; and (iii) the universe of attributes to be considered, in terms of both their number and their degree of importance to the classifier. In addition, the attributes selected by the Information Gain feature selection algorithm suggest that the decision to consume culture, more specifically the information good films at movie theaters, is strongly related to individual aspects such as income, educational level and preferences for cultural goods.
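A minimal sketch of the comparison described above, information-gain-style feature selection followed by Naïve Bayes and SVM classifiers, assuming the POF microdata have already been encoded as a numeric feature matrix (all data here are placeholders):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder for POF-derived features (income, education, etc.) and the
# binary target: 1 = goes to the cinema, 0 = does not.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 30))
y = (rng.random(1000) < 0.2).astype(int)    # imbalanced classes, as in survey data

# mutual_info_classif plays the role of the Information Gain ranking here.
select = SelectKBest(mutual_info_classif, k=10)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf", class_weight="balanced"))]:
    model = make_pipeline(StandardScaler(), select, clf)
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: balanced accuracy {scores.mean():.2f}")
```

Balanced accuracy is used here because of the class imbalance the study highlights; the thesis's own evaluation metrics may differ.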
30

Utilização do algoritmo de aprendizado de máquinas para monitoramento de falhas em estruturas inteligentes / Use of machine learning algorithms for fault monitoring in smart structures

Guimarães, Ana Paula Alves [UNESP] 20 December 2016 (has links)
Structural health monitoring (SHM) is an area that has been extensively studied because it allows the construction of systems able to identify damage at an early stage, thereby avoiding serious future losses. Ideally, these systems should require minimal human interference. Systems built around the concept of learning have the capacity to be autonomous. Because they have these properties, machine learning algorithms are believed to be an excellent option for carrying out the steps of damage identification, localization and assessment, with the ability to obtain highly accurate results at minimal error rates. This work focuses on using the support vector machine algorithm to aid structural health monitoring and thereby achieve better accuracy in identifying the presence or absence of damage, reducing error rates through machine learning approaches and enabling intelligent, efficient monitoring. The LIBSVM library was used for analysis and validation of the proposed approach. It was thus possible to train and classify the data, allowing damage to be identified, and subsequently, using the predictions made by the algorithm, to determine the location of the damage in the structure. The damage identification and localization results were quite satisfactory.
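As a sketch of the two-stage use of the classifier described above, first detecting damage and then localizing it, the snippet below uses scikit-learn's SVC, which wraps LIBSVM internally; the feature vectors and region labels are placeholders:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder signatures extracted from the structure's sensors (e.g. impedance
# or vibration features); labels: 0 = healthy, 1..4 = damage in region 1..4.
rng = np.random.default_rng(7)
X = rng.normal(size=(120, 16))
region = rng.integers(0, 5, size=120)

# Stage 1: damage present or absent.
detector = SVC(kernel="rbf").fit(X, (region > 0).astype(int))

# Stage 2: for samples flagged as damaged, predict which region is affected.
damaged = region > 0
locator = SVC(kernel="rbf").fit(X[damaged], region[damaged])

new_signature = rng.normal(size=(1, 16))
if detector.predict(new_signature)[0] == 1:
    print("damage suspected in region", locator.predict(new_signature)[0])
```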
