401

Deep Transferable Intelligence for Wearable Big Data Pattern Detection

Kiirthanaa Gangadharan (11197824) 06 August 2021
Biomechanical big data is of great significance to precision health applications, among which we take special interest in Physical Activity Detection (PAD). In this study, we have performed extensive research on deep learning-based PAD from biomechanical big data, focusing on the challenges raised by the need for real-time edge inference. First, since motion sensors can be placed at many body locations, we have thoroughly compared and analyzed how sensor location affects deep learning-based PAD performance. We have further compared the six sensor channels (3-axis accelerometer and 3-axis gyroscope). Second, we have selected the optimal sensor location and the optimal sensor channel, which not only yields sensor usage suggestions but also enables ultra-low-power applications on the edge. Third, we have investigated innovative methods to minimize the training effort of the deep learning model by leveraging a transfer learning strategy. More specifically, we propose to pre-train a transferable deep learning model on data from other subjects and then fine-tune the model on limited data from the target user. We have found that, in the single-channel case, transfer learning effectively improves the deep model's performance even when the fine-tuning effort is very small. This research, demonstrated by comprehensive experimental evaluation, shows the potential of ultra-low-power PAD with a minimized sensor stream and minimized training effort.
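To illustrate the pre-train/fine-tune strategy described in this abstract, here is a minimal sketch in PyTorch. The architecture, window size, number of classes, and synthetic data are illustrative assumptions, not the thesis's actual model.

```python
# Sketch of the transfer strategy: pre-train on other subjects, then
# fine-tune on a small amount of target-user data. All shapes are assumed.
import torch
import torch.nn as nn

class PADNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        # Feature extractor over a single sensor channel (1 x 128-sample window)
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * 32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, x, y, epochs, lr=1e-3):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-train on pooled data from other subjects (synthetic stand-in here).
model = PADNet()
x_src, y_src = torch.randn(512, 1, 128), torch.randint(0, 6, (512,))
train(model, x_src, y_src, epochs=20)

# 2) Fine-tune on limited target-user data: freeze the feature extractor
#    so only the classification head adapts to the new user.
for p in model.features.parameters():
    p.requires_grad = False
x_tgt, y_tgt = torch.randn(32, 1, 128), torch.randint(0, 6, (32,))
train(model, x_tgt, y_tgt, epochs=10)
```

Freezing the shared feature extractor keeps the fine-tuning effort small, which matches the abstract's single-channel, low-training-effort finding.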
402

Semantic Interoperability in the In Vitro Diagnostics Domain: Knowledge Representation and Alignment

Mary, Melissa 23 October 2017
The centralization of patient data in digital repositories raises issues of interoperability with the various medical information systems used in clinics, pharmacies, and medical laboratories. The public health authorities, in charge of developing and deploying these repositories, recommend the use of standards to structure (syntax) and encode (semantics) health information. For in vitro diagnostics (IVD) data, two semantic standards are widely recommended:
- the LOINC® terminology (Logical Observation Identifier Names and Codes) to represent laboratory tests;
- the SNOMED CT® ontology (Systematized Nomenclature Of MEDicine Clinical Terms) to express the observed results.
This thesis addresses semantic interoperability problems in clinical microbiology along two major axes. How can an IVD Knowledge Organization System for microbiology be aligned with SNOMED CT®? To answer this, I opted to develop alignment methodologies adapted to in vitro diagnostic data rather than a method specific to SNOMED CT®. Common ontology alignment methods were evaluated on a gold-standard alignment between LOINC® and SNOMED CT®; the most appropriate were implemented in an R library, which serves as a starting point for creating new alignments at bioMérieux. What are the benefits and limits of a formal representation of IVD knowledge? To answer this, I studied the formalization of the <test, result> pair (observation) within a laboratory report. I proposed a logical formalism to represent LOINC® tests and demonstrated the benefits of an ontological representation for classifying and querying laboratory tests. As a second step, I formalized an observation pattern compatible with the SNOMED CT® ontology and aligned with the concepts of the top-level ontology BioTopLite2. Finally, the observation pattern was evaluated for use within clinical microbiology decision-support systems. In summary, my thesis contributes to the sharing and reuse of patient data. Problems of semantic interoperability and knowledge formalization in the in vitro diagnostics domain still hamper the development of expert systems. My research has removed some of these obstacles, and its results can be reused in new intelligent clinical microbiology systems, for example to monitor the emergence of multi-resistant bacteria and adapt antibiotic therapies accordingly.
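To illustrate the kind of lexical alignment method this thesis evaluates, here is a minimal sketch that scores candidate mappings between test labels by string similarity. The codes and labels below are invented stand-ins; real work would use the LOINC® and SNOMED CT® releases and richer matchers (normalization, synonym expansion, structure-based similarity).

```python
# Sketch of lexical terminology alignment: score label pairs and keep
# candidates above a threshold. Labels and codes here are hypothetical.
from difflib import SequenceMatcher

loinc_like = {"600-7": "Bacteria identified in Blood by Culture"}
snomed_like = {
    "30088009": "Blood culture (procedure)",
    "104177005": "Microbial identification test (procedure)",
}

def similarity(a: str, b: str) -> float:
    # Case-insensitive string similarity in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

threshold = 0.4
for code_l, label_l in loinc_like.items():
    for code_s, label_s in snomed_like.items():
        score = similarity(label_l, label_s)
        if score >= threshold:
            print(f"{code_l} -> {code_s}  ({score:.2f})")
```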
403

Knowledge Extraction from Description Logic Terminologies

Chen, Jieying 30 November 2018
An increasing number of large ontologies have been developed and made available in repositories such as the NCBO BioPortal. Ensuring access to the most relevant knowledge contained in large ontologies has been identified as an important challenge. To this end, this thesis proposes three different notions: minimal ontology modules (sub-ontologies that preserve all entailments over a given vocabulary), best ontology excerpts (small sets of axioms that best capture the knowledge about the vocabulary while allowing a degree of semantic loss), and projection modules (sub-ontologies of a target ontology that entail the subsumption, instance, and conjunctive queries that follow from a reference ontology). To compute minimal modules and best excerpts, we introduce the notion of subsumption justification as an extension of justification (a minimal set of axioms needed to preserve a logical consequence) to capture the subsumption knowledge between a term and all other terms in the vocabulary. Similarly, we introduce the notion of projection justification, which entails the consequences of the three query types, in order to compute projection modules. Finally, we evaluate our approaches by applying a prototype implementation to large ontologies.
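To illustrate the building block named above, here is a minimal sketch of justification extraction: a minimal axiom set preserving a consequence, found by deletion-based shrinking. Axioms are reduced to toy atomic subsumptions and entailment to graph reachability; real description logic reasoning is far richer, so this only illustrates the minimization idea.

```python
# Sketch of deletion-based justification extraction over toy subsumption
# axioms (A ⊑ B pairs). Entailment is reachability in the subsumption graph.
def entails(axioms, goal):
    sub, sup = goal
    # Everything reachable from `sub` via the axioms.
    reached, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for (a, b) in axioms:
            if a == x and b not in reached:
                reached.add(b)
                frontier.append(b)
    return sup in reached

def justification(axioms, goal):
    # Drop each axiom that is not needed to preserve the consequence.
    just = list(axioms)
    for ax in list(just):
        rest = [a for a in just if a != ax]
        if entails(rest, goal):
            just = rest
    return just

axioms = [("Cat", "Mammal"), ("Mammal", "Animal"),
          ("Cat", "Pet"), ("Pet", "Animal")]
# Two alternative axiom paths entail Cat ⊑ Animal; the shrink keeps one.
print(justification(axioms, ("Cat", "Animal")))
```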
404

Using Latent Discourse Indicators to identify goodness in online conversations

Ayush Jain (6012219) 16 January 2020
In this work, we model latent discourse indicators to classify constructive and collaborative conversations online. Such conversations are considered good because they are rich in content and have a sense of direction: they resolve an issue, solve a problem, or yield new insights and knowledge. These discourse indicators characterize the flow of information, sentiment, and community structure within discussions. We build a deep relational model that captures these complex discourse behaviors as latent variables and makes a global prediction about the overall conversation based on these higher-level behaviors. We use DRaiL, a declarative deep relational learning platform built on PyTorch, in which the relevant discourse behaviors are formulated as discrete latent variables and scored with a deep model. These variables capture the nuances of online conversations and provide the information needed to predict the presence or absence of a collaborative and constructive character in the entire conversational thread. We show that jointly modeling such competing latent behaviors improves performance over traditional direct classification methods, in which all the raw features are simply combined to predict the final decision. We use the Yahoo News Annotated Comments Corpus, a dataset of discussions on Yahoo News forums, with final labels annotated according to our precise and restricted definitions of positively labeled conversations. We formulated our annotation guidelines on a sample set of conversations and resolved annotation conflicts by revisiting those examples.
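As a rough illustration of the modeling idea, the sketch below scores several latent discourse behaviors with small neural modules and predicts a global label from those scores. This is an illustrative stand-in, not the DRaiL formulation (which scores declarative rules and performs joint inference over the latent variables); the feature dimension and behavior count are assumptions.

```python
# Sketch: per-behavior latent scores feeding a global conversation label.
import torch
import torch.nn as nn

class LatentBehaviorModel(nn.Module):
    def __init__(self, feat_dim=64, n_behaviors=3):
        super().__init__()
        # One scoring head per latent behavior (e.g., information flow,
        # sentiment dynamics, community structure).
        self.behavior_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(n_behaviors)
        ])
        # Global good/not-good decision from the latent behavior scores.
        self.global_head = nn.Linear(n_behaviors, 2)

    def forward(self, thread_feats):
        latent = torch.cat([h(thread_feats) for h in self.behavior_heads], dim=1)
        return self.global_head(torch.sigmoid(latent)), latent

model = LatentBehaviorModel()
logits, latent = model(torch.randn(8, 64))   # 8 conversation threads
print(logits.shape, latent.shape)            # (8, 2), (8, 3)
```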
405

Predictive Visual Analytics of Social Media Data for Supporting Real-time Situational Awareness

Luke Snyder (8764473) 01 May 2020
Real-time social media data can provide useful information on evolving events and situations, and various domain users increasingly leverage such data to gain rapid situational awareness. Informed by discussions with first responders and government officials, we focus on two major barriers limiting the widespread adoption of social media for situational awareness: the lack of geotagged data and the deluge of irrelevant information during events. Geotags are naturally useful, as they indicate the location of origin and provide geographic context; however, only a small portion of social media is geotagged, limiting its practical use for situational awareness. The deluge of irrelevant data poses equal difficulties, impeding the effective identification of semantically relevant information. Existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process, so classifiers cannot be interactively retrained in real time for specific events or user-dependent needs, limiting situational awareness. In this work, we first adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction and integrate it with a visual analytics system tailored for real-time situational awareness. We then present a novel interactive learning framework in which users rapidly identify relevant data by iteratively correcting the relevance classification of tweets in real time. We integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system adapted for real-time situational awareness.
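To make the interactive learning loop concrete, here is a minimal sketch: a relevance classifier is updated incrementally as the user corrects its predictions on streaming tweets. The vectorizer, classifier, example tweets, and simulated corrections are illustrative assumptions, not the SMART 2.0 implementation.

```python
# Sketch of interactive relevance retraining on a tweet stream.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier(loss="log_loss")
classes = [0, 1]  # 0 = irrelevant, 1 = relevant

def user_corrects(texts, predicted):
    # Stand-in for the analyst: pretend tweets mentioning "flood" are
    # relevant to the event being monitored.
    return [1 if "flood" in t.lower() else 0 for t in texts]

stream = [["Flood waters rising downtown", "Check out my new phone"],
          ["Road closed due to flooding", "Great game last night"]]

for batch in stream:
    X = vec.transform(batch)
    preds = clf.predict(X) if hasattr(clf, "coef_") else [0] * len(batch)
    labels = user_corrects(batch, preds)         # user fixes wrong predictions
    clf.partial_fit(X, labels, classes=classes)  # retrain in real time
```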
406

Real-Time Precise Damage Characterization in Self-Sensing Materials via Neural Network-Aided Electrical Impedance Tomography: A Computational Study

Lang Zhao (8790224) 05 May 2020
Many cases have demonstrated the importance of structural health monitoring (SHM) strategies that can assess the structural health of infrastructure and buildings in order to prevent potential economic and human losses. Nanocomposite materials such as carbon nanofiller-modified composites have great potential for SHM because they are piezoresistive: the damage state of the material can be determined from the distribution of conductivity changes. This is essential for detecting damage at positions that cannot be observed by eye, for example the inner layers of an aerofoil. To date, many researchers have studied how damage influences the conductivity of nanocomposite materials, and the electrical impedance tomography (EIT) method has been widely applied to detect damage-induced conductivity changes. However, knowing how to calculate the conductivity change caused by damage is not enough for SHM; it is more valuable to determine the mechanical damage that results in the observed conductivity changes. In this work, we apply machine learning methods to determine the damage state, specifically the number, radius, and center positions of holes in material specimens, from the conductivity-change data generated by the EIT method. Our results demonstrate that machine learning methods can accurately and efficiently detect damage in material specimens by analyzing conductivity-change data. This conclusion is important to the field of SHM and will speed up the damage detection process for industries such as aviation and mechanical engineering.
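The inverse mapping studied here can be sketched as a regression network from an EIT conductivity-change vector to damage parameters. The mesh size, architecture, and synthetic data below are illustrative assumptions standing in for EIT simulations.

```python
# Sketch: regress damage parameters (hole center x, y and radius) from a
# conductivity-change map produced by EIT. Data are synthetic stand-ins.
import torch
import torch.nn as nn

n_pixels = 256          # EIT reconstruction discretized on a mesh/grid
model = nn.Sequential(
    nn.Linear(n_pixels, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),   # -> (x_center, y_center, radius)
)

x = torch.randn(1024, n_pixels)   # conductivity-change maps (synthetic)
y = torch.rand(1024, 3)           # damage parameters (synthetic)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```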
407

MACHINE LEARNING MODEL FOR ESTIMATION OF SYSTEM PROPERTIES DURING CYCLING OF COAL-FIRED STEAM GENERATOR

Abhishek Navarkar (8790188) 06 May 2020
The intermittent nature of renewable energy, variations in energy demand, and fluctuations in oil and gas prices have all contributed to variable demand for power generation from coal-burning power plants. The varying demand leads to load-follow and on/off operations referred to as cycling. Cycling causes transients of properties such as pressure and temperature within various components of the steam generation system; these transients can increase damage from fatigue and creep-fatigue interactions, shortening the life of components. A data-driven model based on artificial neural networks (ANN) is developed for the first time to estimate properties of the steam generator components during cycling operations of a power plant. This approach uses data from the Coal Creek Station power plant in North Dakota, USA, collected over 10 years at 1-hour resolution. Cycling characteristics of the plant are identified using a time series of gross power. Given a gross power profile and initial conditions, the ANN model estimates the component properties as they vary during cycling operations. As a representative example, ANN estimates are presented for the superheater outlet pressure, the reheater inlet temperature, and the flue gas temperature at the air heater inlet. The changes in these variables as a function of gross power over time are compared with measurements to assess the predictive capability of the model. Mean square errors of 4.49E-04 for superheater outlet pressure, 1.62E-03 for reheater inlet temperature, and 4.14E-04 for flue gas temperature at the air heater inlet were observed.
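The estimation task can be sketched as a multi-output regression from a gross-power history plus initial conditions to the three reported properties. The window length, network size, and synthetic data below are illustrative assumptions; the study used 10 years of 1-hour-resolution plant data.

```python
# Sketch: ANN mapping (gross-power window, initial conditions) to three
# steam generator properties. All data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
power_window = rng.uniform(0.3, 1.0, size=(n, 24))  # last 24 h of gross power
init_cond = rng.normal(size=(n, 3))                 # initial property values
X = np.hstack([power_window, init_cond])
# Targets: superheater outlet pressure, reheater inlet temperature,
# flue gas temperature at the air heater inlet (synthetic stand-ins).
y = rng.normal(size=(n, 3))

X = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X, y)
print("training R^2:", model.score(X, y))
```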
408

Proton to proteome, a multi-scale investigation of drug discovery

Jonathan A Fine (7027766) 08 May 2020
Chemical science spans multiple scales, from a single proton to the collection of proteins that make up a proteome. Throughout my graduate research career, I have developed statistical and machine learning models to better understand chemistry at these different scales, from predicting molecular properties in analytical and synthetic chemistry to integrating experiments with chemo-proteomic machine learning models for drug design. Starting at the proteome scale, I will discuss repurposing compounds for mental health indications and visualizing the relationships between these disorders. Moving to the cellular level, I will introduce the use of the negative binomial distribution to find biomarkers in MS/MS data, and machine learning (ML) models used to select potent, non-toxic small molecules for the treatment of castration-resistant prostate cancer (CRPC). At the protein scale, I will introduce CANDOCK, a method to rapidly and accurately dock small molecules, the algorithm used to create the ML model for CRPC. Next, I will showcase a deep learning model that determines small-molecule functional groups from FTIR and MS spectra, followed by a similar, chemically interpretable graph-based machine learning approach that identifies whether a small molecule will undergo a diagnostic reaction in mass spectrometry. Finally, I will examine chemistry at the proton level and show how quantum mechanics combined with machine learning can be used to understand chemical reactions. I believe that chemical machine learning models have the potential to accelerate several aspects of drug discovery, including discovery, process, and analytical chemistry.
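One of the techniques named above, negative binomial modeling of MS/MS counts, can be sketched as a simple differential-abundance test. The counts, group sizes, and model choice below are illustrative assumptions, not the thesis's pipeline.

```python
# Sketch: fit a negative binomial regression to spectral counts for one
# candidate biomarker and test the disease-vs-control group effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic spectral counts: 20 control and 20 disease samples, the latter
# with a higher mean (a stand-in for a real differentially abundant protein).
counts = np.concatenate([rng.negative_binomial(5, 0.5, 20),
                         rng.negative_binomial(5, 0.25, 20)])
group = np.repeat([0, 1], 20)

X = sm.add_constant(group)
model = sm.NegativeBinomial(counts, X).fit(disp=0)
print(model.params)      # group coefficient ~ log fold change (plus alpha)
print(model.pvalues[1])  # significance of the group effect
```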
409

Deep Learning Based User Models for Interactive Optimization of Watershed Designs

Andrew Paul Hoblitzell (8086769) 11 December 2019
This dissertation combines stakeholder and analytical intelligence for consensus decision-making via an interactive optimization process. It outlines techniques for developing user models of the subjective criteria of human stakeholders for an environmental decision support system called WRESTORE, compares several user modeling techniques, and develops methods for selectively incorporating such user models into interactive optimization that combines multiple objective and subjective criteria.

The dissertation describes additional functionality for our watershed planning system, WRESTORE (Watershed REstoration Using Spatio-Temporal Optimization of REsources) (http://wrestore.iupui.edu), including techniques for performing the interactive optimization process in the presence of limited data. This work adds a user modeling component that builds a computational model of a stakeholder's preferences and integrates it into the decision support system.

Our system, like many decision support systems, depends on stakeholder interaction. The user modeling component utilizes deep learning, which can be challenging with limited data. Our work combines user models trained on limited data with application-specific techniques to address some of these challenges, and the dissertation describes steps for implementing accurate virtual stakeholder models based on limited training data.

Another method for dealing with limited data, based on quantifying training data uncertainty, is also presented, as sketched below. Results show more stable convergence in fewer iterations when using an uncertainty-based incremental sampling method than when using stability-based or random sampling.

The dissertation also discusses non-stationary reinforcement-based feature selection for the interactive optimization component of our system. The results indicate that the proposed feature selection approach can effectively suppress superfluous and adversarial dimensions, which, if left untreated, degrade both computational performance and interactive optimization performance against analytically determined environmental fitness functions.

The contribution of this dissertation lays the foundation for a framework for multi-stakeholder consensus decision-making in the presence of limited data.
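The uncertainty-based incremental sampling idea can be sketched as follows: from a pool of unlabeled examples, those with the highest predictive uncertainty (here, variance across a small ensemble) are labeled and added to the training set first. The data, ensemble, and simulated labels are illustrative assumptions, not the WRESTORE user-model implementation.

```python
# Sketch of uncertainty-based incremental sampling with an ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(30, 8)), rng.integers(0, 2, 30)
X_pool = rng.normal(size=(500, 8))  # unlabeled candidate design evaluations

for round_ in range(5):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train, y_train)
    # Predictive uncertainty: variance of per-tree predictions.
    votes = np.stack([t.predict(X_pool) for t in clf.estimators_])
    uncertainty = votes.var(axis=0)
    pick = np.argsort(uncertainty)[-10:]   # most uncertain examples first
    y_new = rng.integers(0, 2, 10)         # stand-in for stakeholder labels
    X_train = np.vstack([X_train, X_pool[pick]])
    y_train = np.concatenate([y_train, y_new])
    X_pool = np.delete(X_pool, pick, axis=0)
```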
410

A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER

Hafiz Muhammad Junaid Khan (8119418) 12 December 2019
Malicious URLs pose serious cyber-security threats to Internet users, and it is critical to detect them so that they can be blocked from user access. In the past few years, several techniques have been proposed to differentiate malicious URLs from benign ones with the help of machine learning; machine learning algorithms learn trends and patterns in a dataset and use them to identify anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL datasets. To achieve this, we identify a list of substantial features that can be used to classify all types of malicious URLs, and then select the most significant lexical features using chi-square and ANOVA statistical tests. The effectiveness of these feature sets is tested using a combination of single and ensemble machine learning algorithms. We build a machine learning based real-time malicious URL detection system as a web service: a Chrome extension intercepts the browser's URL requests and sends them to the web service for analysis, and the web service classifies each URL as benign or malicious using the saved ML model. We also evaluate the performance of the web service to test whether it is scalable.
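The feature-selection step can be sketched with scikit-learn: lexical URL features are scored with chi-square and ANOVA tests, and the top-k are kept for a classifier. The features and data below are illustrative stand-ins; the thesis derives its feature list from two public malicious-URL datasets.

```python
# Sketch: chi-square / ANOVA feature selection for lexical URL features,
# followed by an ensemble classifier. Data are synthetic stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Example lexical features per URL: length, digit count, '-' count,
# subdomain depth, etc. (non-negative, as chi2 requires).
X = rng.integers(0, 50, size=(2000, 12)).astype(float)
y = rng.integers(0, 2, size=2000)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

chi_sel = SelectKBest(chi2, k=6).fit(X_tr, y_tr)         # chi-square scores
anova_sel = SelectKBest(f_classif, k=6).fit(X_tr, y_tr)  # ANOVA F-scores

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(chi_sel.transform(X_tr), y_tr)
print("accuracy:", clf.score(chi_sel.transform(X_te), y_te))
```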
