About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Robotic Architecture Inspired by Behavior Analysis

Cláudio Adriano Policastro 24 October 2008 (has links)
Sociable robots should be able to interact, communicate, understand, and relate to human beings in a natural way. There are several scientific and practical motivations for developing sociable robots as platforms for research, education, and entertainment. However, although several sociable robots have already been developed successfully, much work remains to increase their effectiveness. The use of a robotic architecture can strongly reduce the time and effort required to construct a sociable robot. Such an architecture must have structures and mechanisms that allow social interaction, behavior control, and learning from the environment. It must also support perception and attention, so that a sociable robot can perceive the richness of human behavior and of the environment and can learn from social interactions. The learning processes evidenced in Behavior Analysis can lead to promising methods and structures for building sociable robots that learn through interaction with the environment and exhibit appropriate social behavior. The purpose of this work is the development of a robotic architecture inspired by Behavior Analysis. The developed architecture is able to simulate operant behavior learning, and the proposed methods and structures allow the control and exhibition of appropriate social behavior and learning from interaction with the environment. The architecture was evaluated in the context of a nontrivial real problem: the learning of shared attention. The results show that the architecture exhibits appropriate behaviors during a real, controlled social interaction, and that it can learn from social interaction. This work is the basis for developing a tool for the construction of sociable robots, and the results open many opportunities for future work.
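A minimal sketch (editorial, not from the thesis) of the operant-learning principle the architecture simulates: responses followed by reinforcement become more probable in the presence of the same antecedent stimulus. The behaviors, stimuli, and reward scheme below are illustrative assumptions, loosely themed on the shared-attention task.

```python
import random

# Illustrative operant learning: reinforcement strengthens the
# antecedent -> behavior association (selection by consequences).
class OperantAgent:
    def __init__(self, behaviors, learning_rate=0.1):
        self.behaviors = behaviors
        self.lr = learning_rate
        self.strength = {}  # (antecedent, behavior) -> associative strength

    def act(self, antecedent):
        # Sample a behavior proportionally to its associative strength.
        weights = [self.strength.get((antecedent, b), 1.0) for b in self.behaviors]
        r, acc = random.uniform(0, sum(weights)), 0.0
        for b, w in zip(self.behaviors, weights):
            acc += w
            if r <= acc:
                return b
        return self.behaviors[-1]

    def reinforce(self, antecedent, behavior, reward):
        # Positive reward strengthens the association; negative weakens it.
        key = (antecedent, behavior)
        self.strength[key] = max(0.01, self.strength.get(key, 1.0) + self.lr * reward)

agent = OperantAgent(["look_at_face", "look_at_object", "point"])
for _ in range(100):
    b = agent.act("caregiver_gazes_at_object")
    # Hypothetical tutor rewards gaze-following, the core of shared attention.
    agent.reinforce("caregiver_gazes_at_object", b, 1.0 if b == "look_at_object" else -0.2)
```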
322

Ontologies and DSLs in the generation of decision support systems: the SustenAgro case study

John Freddy Garavito Suarez 03 May 2017 (has links)
Decision Support Systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, mathematical operations (that process the data), and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement DSSs, because domain experts cannot formalize it in a computable model that can be integrated into a DSS. In practice, the knowledge modeling is carried out by the developers, biasing domain knowledge and hindering agile DSS development (as domain experts cannot modify the code directly). To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent expert knowledge, and a Domain Specific Language (DSL) to model DSS behavior. OWL ontologies are computable knowledge representations that allow DSSs to be defined in a format understandable and accessible to both humans and machines. This method was used to create the Decisioner Framework for the instantiation of DSSs. Decisioner automatically generates a DSS from an ontology and a description in its DSL, including the DSS interface (using a Web Components library). An online ontology editor, using a simplified format, allows domain experts to modify the ontology and immediately see the consequences of their changes in the DSS. The method was validated by instantiating the SustenAgro DSS in the Decisioner Framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations by sustainability experts from Embrapa Environment (partners in this project) showed that domain experts are able to change the ontology and the DSL program without the help of software developers, and that the system produces correct sustainability analyses.
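The generation idea can be illustrated with a hypothetical miniature: a declarative description of indicators (standing in for the DSL) is interpreted to produce the assessment, so experts edit the description rather than the application code. Everything below, including the indicator names and weighted aggregation, is an assumption for illustration, not the Decisioner DSL itself.

```python
# Hypothetical miniature of a DSL-driven DSS: the declarative
# description drives the evaluation, so changing it changes the DSS
# without touching application code.
dsl_description = {
    "index": "sustainability",
    "indicators": [
        {"name": "soil_quality", "weight": 0.6},
        {"name": "water_use",    "weight": 0.4},
    ],
}

def evaluate(description, answers):
    # Weighted aggregation of expert-defined indicators (illustrative only).
    return sum(ind["weight"] * answers[ind["name"]]
               for ind in description["indicators"])

print(evaluate(dsl_description, {"soil_quality": 0.8, "water_use": 0.5}))  # 0.68
```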
323

Training Methodologies for Energy-Efficient, Low Latency Spiking Neural Networks

Nitin Rathi (11849999) 17 December 2021 (has links)
Deep learning models have become the de facto solution in fields like computer vision, natural language processing, robotics, and drug discovery. The skyrocketing performance and success of multi-layer neural networks come at a significant power and energy cost. Thus, there is a need to rethink the current trajectory and explore different computing frameworks. One such option is spiking neural networks (SNNs), inspired by the spike-based processing observed in biological brains. SNNs, operating with binary signals (or spikes), can potentially be an energy-efficient alternative to the power-hungry analog neural networks (ANNs) that operate on real-valued analog signals. The binary, all-or-nothing spike-based communication of SNNs implemented on event-driven hardware offers a low-power alternative to ANNs; a spike is a delta function of magnitude 1. For all its appeal for low power, training SNNs efficiently for high accuracy remains an active area of research: existing ANN training methodologies, when applied to SNNs, result in networks with very high latency, and supervised training of SNNs with spikes is challenging (due to discontinuous gradients) and resource-intensive (in time, compute, and memory). Thus, we propose compression methods, training methodologies, and learning rules for SNNs.

First, we propose compression techniques for SNNs based on the unsupervised spike-timing-dependent plasticity (STDP) model. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels of emerging in-memory computing hardware. Pruning is based on the power-law weight-dependent STDP model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. Pruning non-critical connections and quantizing the weights of critical synapses are performed at regular intervals during training.

Second, we propose a multimodal SNN that combines two modalities (image and audio). The two unimodal ensembles are connected with cross-modal connections, and the entire network is trained with unsupervised learning. The network receives inputs in both modalities for the same class and predicts the class label. The excitatory connections in the unimodal ensembles and the cross-modal connections are trained with STDP. The cross-modal connections capture the correlation between neurons of different modalities. The multimodal network learns features of both modalities and improves classification accuracy compared to the unimodal topology, even when one of the modalities is distorted by noise. The cross-modal connections are only excitatory and do not inhibit the normal activity of the unimodal ensembles.

Third, we explore supervised learning methods for SNNs. Many works have shown that an SNN for inference can be formed by copying the weights from a trained ANN and setting the firing threshold of each layer to the maximum input received in that layer. Such converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time: the weight update is proportional to the difference between the current time step and the most recent time step at which the neuron generated an output spike.

Fourth, we present techniques to further reduce inference latency in SNNs. SNNs suffer from high inference latency resulting from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network trained with gradient descent to optimize the membrane leak and firing threshold along with the other network parameters (weights). The membrane leak and threshold of each layer are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are applied directly to the input layer of DIET-SNN without conversion to a spike train. The first convolutional layer is trained to convert inputs into spikes: leaky integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs, increasing activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity yields large improvements in computational efficiency.

Finally, we explore the application of SNNs to sequential learning tasks. We propose LITE-SNN, a lightweight SNN suitable for sequential learning on data from dynamic vision sensors (DVS) and in natural language processing (NLP). Sequential data is generally processed with complex recurrent neural networks, such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks, which use explicit feedback connections and internal states to handle long-term dependencies. In contrast, the neuron models used in SNNs, integrate-and-fire (IF) or leaky integrate-and-fire (LIF), have implicit feedback in their internal state (the membrane potential) by design, which can be leveraged for sequential tasks. The membrane potential of the IF/LIF neuron integrates the incoming current and outputs an event (or spike) when the potential crosses a threshold value. Since SNNs compute with highly sparse spike-based spatio-temporal data, the energy per inference is lower than for LSTMs/GRUs. SNNs also have fewer parameters than LSTMs/GRUs, resulting in smaller models and faster inference. We observe the problem of vanishing gradients in vanilla SNNs for longer sequences and implement a convolutional SNN with attention layers to perform sequence-to-sequence learning tasks. The inherent recurrence of SNNs, in addition to fully parallelized convolutional operations, provides an additional mechanism for modeling sequential dependencies and leads to better accuracy than convolutional neural networks with ReLU activations.
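A hedged sketch of the DIET-SNN-style mechanism described above: a LIF layer whose membrane leak and firing threshold are learnable parameters, with a surrogate gradient standing in for the non-differentiable spike. The triangular surrogate shape and soft-reset scheme are illustrative choices, not necessarily those used in the thesis.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward."""
    @staticmethod
    def forward(ctx, v_minus_thr):
        ctx.save_for_backward(v_minus_thr)
        return (v_minus_thr > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Illustrative triangular surrogate centered on the threshold.
        return grad_out * torch.clamp(1.0 - x.abs(), min=0.0)

class LIFLayer(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.fc = nn.Linear(in_f, out_f)
        self.leak = nn.Parameter(torch.tensor(0.9))       # trainable membrane leak
        self.threshold = nn.Parameter(torch.tensor(1.0))  # trainable firing threshold

    def forward(self, x_seq):  # x_seq: (time, batch, in_f)
        v = torch.zeros(x_seq.size(1), self.fc.out_features)
        spikes = []
        for x_t in x_seq:
            v = self.leak * v + self.fc(x_t)       # leaky integration of weighted input
            s = SpikeFn.apply(v - self.threshold)  # spike when potential crosses threshold
            v = v - s * self.threshold             # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes)

out = LIFLayer(10, 4)(torch.rand(20, 2, 10))  # 20 time steps, batch of 2
```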
324

E-scooter Rider Detection System in Driving Environments

Kumar Apurv (11184732) 06 August 2021 (has links)
E-scooters are ubiquitous, and their growing numbers increase their interactions with other vehicles on the road. E-scooter riders exhibit atypical behavior that varies enormously from that of other vulnerable road users, creating new challenges for vehicle active safety systems and automated driving functionalities. The detection of e-scooter riders by other vehicles is the first step in addressing these risks. This research presents a novel vision-based system to differentiate between e-scooter riders and regular pedestrians, along with a benchmark dataset of e-scooter riders in natural environments. An efficient system pipeline built from two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2, performs detection of these vulnerable e-scooter riders.
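A sketch of how such a two-stage pipeline could be wired: a person detector proposes boxes, and a MobileNetV2 head classifies each crop as rider or pedestrian. `detect_persons` is a hypothetical placeholder for the YOLOv3 stage (YOLOv3 is not bundled with torchvision), and the two-class head is an assumption.

```python
import torch
from torchvision import models, transforms

# Stage 2: MobileNetV2 binary classifier over detected person crops.
classifier = models.mobilenet_v2(num_classes=2)  # rider / pedestrian
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def detect_persons(frame):
    """Placeholder for the YOLOv3 person detector: returns PIL image crops."""
    raise NotImplementedError

def classify_riders(frame):
    labels = []
    for crop in detect_persons(frame):
        x = preprocess(crop).unsqueeze(0)
        with torch.no_grad():
            logits = classifier(x)
        labels.append("e-scooter rider" if logits.argmax(1).item() == 1 else "pedestrian")
    return labels
```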
325

Deep Transferable Intelligence for Wearable Big Data Pattern Detection

Kiirthanaa Gangadharan (11197824) 06 August 2021 (has links)
Biomechanical big data is of great significance to precision health applications, among which we take special interest in physical activity detection (PAD). In this study, we performed extensive research on deep learning-based PAD from biomechanical big data, focusing on the challenges raised by the need for real-time edge inference. First, since motion sensors can be placed at many body locations, we thoroughly compared and analyzed how sensor location affects deep learning-based PAD performance, and we further compared the six sensor channels (3-axis accelerometer and 3-axis gyroscope). Second, we selected the optimal sensor and the optimal sensor channel, which not only provides sensor-usage suggestions but also enables ultra-low-power applications on the edge. Third, we investigated methods to minimize the training effort of the deep learning model by leveraging a transfer learning strategy: we pre-train a transferable deep learning model using data from other subjects and then fine-tune it using limited data from the target user. We found that, for the single-channel case, transfer learning can effectively increase deep model performance even when the fine-tuning effort is very small. This research, demonstrated by comprehensive experimental evaluation, shows the potential of ultra-low-power PAD with a minimized sensor stream and minimized training effort.
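The pre-train/fine-tune strategy can be sketched as below: freeze a feature encoder pre-trained on other subjects and update only the classification head on a small amount of target-user data. The architecture, sizes, and class count are assumptions for illustration, not the study's model.

```python
import torch
import torch.nn as nn

class PADNet(nn.Module):
    """Toy 1-D conv encoder + linear head over a single sensor channel."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x))

model = PADNet(n_classes=6)
# 1) Pre-train on data from other subjects (training loop elided).
# 2) Fine-tune on limited target-user data: freeze the encoder,
#    update only the classification head.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x_target = torch.randn(8, 1, 128)       # a few target-user signal windows
y_target = torch.randint(0, 6, (8,))
loss = criterion(model(x_target), y_target)
loss.backward()
optimizer.step()
```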
326

Semantic Interoperability in the In Vitro Diagnostics Domain: Knowledge Representation and Alignment

Mary, Melissa 23 October 2017 (has links)
The centralization of patient data in digital repositories raises issues of interoperability with the various medical information systems, such as those used in clinics, pharmacies, or medical laboratories. The public health authorities in charge of developing and deploying these repositories recommend the use of standards to structure (syntax) and encode (semantics) health information. For in vitro diagnostics (IVD) data, two semantic standards are widely recommended: the LOINC® terminology (Logical Observation Identifier Names and Codes) to represent laboratory tests, and the SNOMED CT® ontology (Systematized Nomenclature Of MEDicine Clinical Terms) to express the observed results. This thesis focuses on semantic interoperability problems in clinical microbiology, along two major axes.

How can an IVD Knowledge Organization System be aligned with SNOMED CT®? To answer this, I opted to develop alignment methodologies adapted to in vitro diagnostic data rather than proposing a method specific to SNOMED CT®. The common alignment methods are evaluated on a gold-standard alignment between LOINC® and SNOMED CT®. The most appropriate methods are implemented in an R library, which serves as a starting point for creating new alignments at bioMérieux.

What are the advantages and limits of a formal representation of IVD knowledge? To answer this, I looked into the formalization of the test-result pair (observation) in a laboratory report. I proposed a logical formalization to represent the LOINC® terminology and demonstrated the advantages of an ontological representation for classifying and querying laboratory tests. As a second step, I formalized an observation pattern compatible with the SNOMED CT® ontology and aligned with the concepts of the top-level ontology BioTopLite2. Finally, the observation pattern was evaluated for use within clinical microbiology decision support systems.

In summary, my thesis is part of a broader effort to share and reuse patient data. The problems of semantic interoperability and knowledge formalization in the field of in vitro diagnostics still hamper the development of expert systems. My research has removed some of these obstacles, and its results could be reused in new intelligent clinical microbiology systems, for example to monitor the emergence of multi-resistant bacteria and adapt antibiotic therapies accordingly.
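A toy illustration of the simplest, lexical family of alignment methods evaluated in such work: string similarity between entity labels. It is sketched in Python for readability (the thesis implementation is an R library), and the labels and threshold are invented for the example.

```python
from difflib import SequenceMatcher

def align(source_labels, target_labels, threshold=0.85):
    """Map each source label to its most lexically similar target label."""
    mappings = []
    for s in source_labels:
        best = max(target_labels,
                   key=lambda t: SequenceMatcher(None, s.lower(), t.lower()).ratio())
        score = SequenceMatcher(None, s.lower(), best.lower()).ratio()
        if score >= threshold:  # keep only confident candidate mappings
            mappings.append((s, best, round(score, 2)))
    return mappings

print(align(["Bacteria identified in Blood"],
            ["Bacteria identified in blood specimen", "Virus identified in blood"]))
```

Real LOINC®-to-SNOMED CT® alignment requires richer normalization (synonyms, units, specimen semantics) than raw string similarity; this only shows the shape of the candidate-generation step.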
327

Knowledge Extraction from Description Logic Terminologies

Chen, Jieying 30 November 2018 (has links)
An increasing number of large ontologies have been developed and made available in repositories such as the NCBO BioPortal. Ensuring access to the most relevant knowledge contained in large ontologies has been identified as an important challenge. To this end, this thesis proposes three different notions: minimal ontology modules (sub-ontologies that preserve all entailments over a given vocabulary), best ontology excerpts (small numbers of axioms that best capture the knowledge regarding the vocabulary while allowing a degree of semantic loss), and projection modules (sub-ontologies of a target ontology that entail the subsumption, instance, and conjunctive queries that follow from a reference ontology). For computing minimal modules and best excerpts, we introduce the notion of subsumption justification as an extension of justification (a minimal set of axioms needed to preserve a logical consequence) to capture the subsumption knowledge between a term and all other terms in the vocabulary. Similarly, we introduce the notion of projection justification, which entails the consequences of the three kinds of queries, in order to compute projection modules. Finally, we evaluate our approaches by applying a prototype implementation to large ontologies.
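A black-box sketch of the core primitive, computing one justification by deletion: shrink an axiom set to a minimal subset that still entails a goal. `entails` is a hypothetical stand-in for a description-logic reasoner call; subsumption and projection justifications extend this idea to richer sets of consequences.

```python
def justification(axioms, goal, entails):
    """Return one minimal subset of `axioms` that still entails `goal`.

    `entails(axiom_list, goal)` is an oracle, e.g. a DL reasoner call.
    """
    assert entails(axioms, goal), "goal must follow from the full set"
    just = list(axioms)
    for ax in list(just):                  # deletion (shrinking) phase
        trial = [a for a in just if a != ax]
        if entails(trial, goal):           # axiom is redundant for the goal
            just = trial
    return just                            # minimal: removing any axiom breaks entailment
```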
328

Using Latent Discourse Indicators to identify goodness in online conversations

Ayush Jain (6012219) 16 January 2020 (has links)
In this work, we model latent discourse indicators to classify constructive and collaborative conversations online. Such conversations are considered good because they are rich in content and have a sense of direction, aiming to resolve an issue, solve a problem, or gain new insights and knowledge. These discourse indicators characterize the flow of information, sentiment, and community structure within discussions. We build a deep relational model that captures these complex discourse behaviors as latent variables and makes a global prediction about the overall conversation based on these higher-level behaviors. We use DRaiL, a declarative deep relational learning platform built on PyTorch, in which the relevant discourse behaviors are formulated as discrete latent variables and scored by a deep model. These variables capture the nuances of online conversations and provide the information needed to predict the presence or absence of a collaborative and constructive characterization in the entire conversational thread. We show that jointly modeling such competing latent behaviors improves performance over traditional direct classification methods, in which all raw features are simply combined to predict the final decision. We use the Yahoo News Annotated Comments Corpus, containing discussions from Yahoo News forums, with final labels annotated according to our precise, restricted definitions of positively labeled conversations. We formulated our annotation guidelines on a sample set of conversations and resolved annotation conflicts by revisiting those examples.
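A hedged sketch of the latent-behavior idea, not DRaiL itself: each discourse behavior gets its own scorer, and a global layer combines the behavior scores into the final conversation-level decision. Behavior names and feature sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

BEHAVIORS = ["information_flow", "sentiment", "community_structure"]

# One small scorer per latent discourse behavior.
scorers = nn.ModuleDict({
    b: nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 1))
    for b in BEHAVIORS
})
# Global layer combines behavior scores into the final decision.
global_head = nn.Linear(len(BEHAVIORS), 2)  # constructive vs. not

def predict(thread_features):
    """thread_features: dict mapping behavior name -> (batch, 64) features."""
    scores = torch.cat([scorers[b](thread_features[b]) for b in BEHAVIORS], dim=1)
    return global_head(scores)

logits = predict({b: torch.randn(4, 64) for b in BEHAVIORS})
```

In the actual relational formulation the behaviors are discrete latent variables inferred jointly rather than independent feed-forward scores; the sketch only shows the two-level structure.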
329

Predictive Visual Analytics of Social Media Data for Supporting Real-time Situational Awareness

Luke Snyder (8764473) 01 May 2020 (has links)
Real-time social media data can provide useful information on evolving events and situations, and various domain users increasingly leverage such data to gain rapid situational awareness. Informed by discussions with first responders and government officials, we focus on two major barriers limiting the widespread adoption of social media for situational awareness: the lack of geotagged data and the deluge of irrelevant information during events. Geotags are naturally useful, as they indicate the location of origin and provide geographic context; however, only a small portion of social media is geotagged, limiting its practical use for situational awareness. The deluge of irrelevant data poses equal difficulties, impeding the effective identification of semantically relevant information. Existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process, so classifiers cannot be interactively retrained in real time for specific events or user-dependent needs, limiting situational awareness. In this work, we first adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction and integrate it with a visual analytics system tailored for real-time situational awareness. We then present a novel interactive learning framework in which users rapidly identify relevant data by iteratively correcting the relevance classification of tweets in real time. We integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of the framework within a visual analytics system adapted for real-time situational awareness.
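A minimal sketch of such an interactive retraining loop, using scikit-learn's incremental learner as a stand-in for the framework's classifier: the user corrects relevance labels on a few tweets, and the model is incrementally updated so the stream filter adapts. Tweets, labels, and feature settings are invented for the example.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # stateless, fits streaming use
clf = SGDClassifier(loss="log_loss")              # supports incremental partial_fit

def retrain(corrected_tweets, corrected_labels):
    """Incrementally update the classifier from user corrections."""
    X = vectorizer.transform(corrected_tweets)
    clf.partial_fit(X, corrected_labels, classes=[0, 1])

def filter_relevant(tweets):
    """Keep only tweets the current model deems relevant."""
    X = vectorizer.transform(tweets)
    return [t for t, y in zip(tweets, clf.predict(X)) if y == 1]

retrain(["power outage downtown after storm", "lol cat video"], [1, 0])
print(filter_relevant(["storm damage reported on 5th street"]))
```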
330

Real-Time Precise Damage Characterization in Self-Sensing Materials via Neural Network-Aided Electrical Impedance Tomography: A Computational Study

Lang Zhao (8790224) 05 May 2020 (has links)
Many cases have demonstrated the importance of structural health monitoring (SHM) strategies that can assess the structural health of infrastructure and buildings in order to prevent potential economic and human losses. Nanocomposite materials such as carbon nanofiller-modified composites have great potential for SHM because they are piezoresistive: the damage state of the material can be determined by studying the distribution of conductivity changes. This is essential for detecting damage at positions that cannot be observed by eye, for example the inner layers of an airfoil. Many researchers have studied how damage influences the conductivity of nanocomposite materials, and the electrical impedance tomography (EIT) method has been widely applied to detect damage-induced conductivity changes. However, knowing how to calculate the conductivity change caused by damage is not enough for SHM; it is more valuable to determine the mechanical damage that produces the observed conductivity changes. In this work, we apply machine learning methods to determine the damage state, specifically the number, radius, and center positions of broken holes in material specimens, by studying conductivity-change data generated by the EIT method. Our results demonstrate that machine learning methods can accurately and efficiently detect damage in material specimens by analyzing conductivity-change data. This conclusion is important to the field of SHM and will speed up the damage detection process for industries such as aviation and mechanical engineering.
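A sketch of the learned inverse mapping described above: a small network regressing hole parameters from an EIT-derived conductivity-change map. The network shape, map resolution, and data are synthetic stand-ins, not the study's configuration.

```python
import torch
import torch.nn as nn

N_PIXELS = 32 * 32  # flattened conductivity-change image reconstructed by EIT

# Regress (center x, center y, radius) of a single broken hole.
model = nn.Sequential(
    nn.Linear(N_PIXELS, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One synthetic training step on simulated EIT data.
sigma_change = torch.randn(16, N_PIXELS)  # stand-in for EIT reconstructions
hole_params = torch.rand(16, 3)           # stand-in for ground-truth damage
loss = loss_fn(model(sigma_change), hole_params)
loss.backward()
optimizer.step()
```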
