121

Real time intelligent decision making from heterogeneous and imperfect data / La prise de décision intelligente en temps réel à partir de données hétérogènes et imparfaites

Sfar, Hela 09 July 2019 (has links)
Nowadays, pervasive computing is facing an increasing advancement.
This paradigm is characterized by multiple sensors highly integrated into objects of the physical world. The development of personal applications using data provided by these sensors has prompted the creation of smart environments, designed as advanced overlay frameworks that proactively, but sensibly, assist individuals in their everyday lives. A smart environment application gathers streaming data from the deployed sensors, then processes and analyzes the collected data before making decisions and executing actions on the physical environment. Online data processing consists mainly of data segmentation to divide the stream into fragments. Generally, in the literature, the fragment size is fixed. However, such a static view usually produces imprecise outputs, so dynamic segmentation using variable-size observation windows remains an open issue. The analysis phase takes a segment of sensor data as input and extracts knowledge by means of reasoning or mining processes. In particular, understanding users' daily activities and preventing anomalous situations are growing concerns in the literature, but addressing these problems with small and imperfect data is still a key issue. Indeed, data provided by sensors are often imprecise, inaccurate, outdated, contradictory, or simply missing. Hence, handling uncertainty has become an important aspect. Moreover, monitoring the user to obtain a large amount of data about his or her life routine is not always possible, and is too intrusive; people are often unwilling to be monitored for a long period of time. Obviously, when the acquired data about the user are sufficient, most existing methods can provide precise recognition, but performance declines sharply with small datasets. In this thesis, we mainly explored the cross-fertilization of statistical and symbolic learning approaches, and the contributions are threefold: (i) DataSeg, an algorithm that takes advantage of both unsupervised learning and ontology representation for data segmentation. Unlike most existing methods, this combination chooses the segment size dynamically for several applications; moreover, unlike the approaches in the literature, DataSeg can be adapted to any application's features; (ii) AGACY Monitoring, a hybrid model for activity recognition and uncertainty handling which uses supervised learning, possibilistic logic inference, and an ontology to extract meaningful knowledge from small datasets; (iii) CARMA, a method based on Markov Logic Networks (MLN) and causal association rules to detect anomaly causes in a smart environment so as to prevent their occurrence. By automatically extracting logic rules about anomaly causes and integrating them into the MLN rules, we reach a more accurate situation identification even with partial observations. Each of our contributions was prototyped, tested, and validated using data obtained from real scenarios.
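The variable-window segmentation idea described in the abstract can be illustrated with a minimal sketch: grow the observation window until an incoming sample deviates too far from the window's running mean, then cut a segment. The scalar features and the fixed threshold are simplifying assumptions for illustration; this is not the thesis's DataSeg algorithm.

```python
def dynamic_segments(stream, threshold=1.0):
    """Split a stream into variable-size segments at change points."""
    segments, window = [], []
    for x in stream:
        if window:
            mean = sum(window) / len(window)
            if abs(x - mean) > threshold:  # change detected: close the segment
                segments.append(window)
                window = []
        window.append(x)
    if window:
        segments.append(window)
    return segments
```

For example, `dynamic_segments([0, 0.1, 0.2, 5, 5.1, 5.2])` yields two segments of different content, with no fragment size fixed in advance.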
122

FAZT: FEW AND ZERO-SHOT FRAMEWORK TO LEARN TEMPO-VISUAL EVENTS FROM LITTLE OR NO DATA

Naveen Madapana (11613925) 20 December 2021 (has links)
Supervised classification methods based on deep learning have achieved great success in many domains and tasks that were previously unimaginable. Such approaches build on learning paradigms that require hundreds of examples in order to learn to classify objects or events, so their immediate application to domains with few or no observations is limited. This is because they lack the ability to rapidly generalize to new categories from a few examples or from high-level descriptions of categories, which can be attributed to the significant gap between the way machines represent knowledge and the way humans represent categories in their minds and learn to recognize them. In this context, this research represents categories as semantic trees in a high-level attribute space and proposes an approach to utilize these representations to conduct N-Shot, Few-Shot, One-Shot, and Zero-Shot Learning (ZSL). This work refers to this paradigm as the problem of general classification (GCP) and proposes a unified framework for GCP referred to as the Few and Zero-Shot Technique (FAZT). The FAZT framework is an end-to-end approach that uses trainable 3D convolutional neural networks and recurrent neural networks to simultaneously optimize for both the semantic and the classification tasks. Lastly, the problem of systematically obtaining semantic attributes by utilizing domain-specific ontologies is presented. The proposed framework is validated in the domains of hand gesture and action/activity recognition; however, this research can be applied to other domains such as video understanding, the study of human behavior, and emotion recognition. First, an attribute-based dataset for gestures is developed in a systematic manner by relying on the literature on gestures and semantics, and on crowdsourced platforms such as Amazon Mechanical Turk. To the best of our knowledge, this is the first ZSL dataset for hand gestures (ZSGL dataset).
Next, our framework is evaluated in two experimental conditions: 1. within-category (to test the attribute recognition power) and 2. across-category (to test the ability to recognize an unknown category). In addition, we conducted experiments in zero-shot, one-shot, few-shot, and continuous learning conditions in both open-set and closed-set scenarios. Results show that our framework performs favorably on the ZSGL, Kinetics, UIUC Action, UCF101, and HMDB51 action datasets in all the experimental conditions.
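The core mechanism behind attribute-based zero-shot learning, as used in frameworks like the one above, can be sketched minimally: each class is described by an attribute vector, and a sample whose attributes were predicted by some upstream model is assigned to the nearest class description, even for classes with no training examples. The classes and attribute vectors below are invented for illustration and are not from the ZSGL dataset.

```python
def hamming(a, b):
    """Number of positions where two binary attribute vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def zero_shot_classify(predicted_attrs, class_attrs):
    """Assign the class whose attribute description is nearest.

    class_attrs: {class_name: binary attribute vector}; the classes may
    be entirely unseen at training time.
    """
    return min(class_attrs, key=lambda c: hamming(predicted_attrs, class_attrs[c]))
```

With hypothetical unseen classes `{"wave": [1, 0, 1, 0], "point": [0, 1, 0, 1]}`, a predicted attribute vector close to the "wave" description is labeled "wave" without any "wave" training examples.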
123

Inteligentní rozpoznání činnosti uživatele chytrého telefonu / Intelligent Recognition of the Smartphone User's Activity

Pustka, Michal January 2018 (has links)
This thesis deals with real-time human activity recognition (e.g., running, walking, driving) using sensors available on current mobile devices. The final product of this thesis consists of multiple parts: first, an application for collecting sensor data from mobile devices, followed by a tool for preprocessing the collected data and creating a dataset. The main part of the thesis is the design of a convolutional neural network for activity classification and the subsequent use of this network in an Android mobile application. The combination of these parts creates a comprehensive framework for the detection of user activities. Finally, some interesting experiments were performed and evaluated (e.g., the influence of specific sensors on detection precision).
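The basic operation inside such a convolutional classifier, a one-dimensional convolution sliding over a window of sensor samples, can be sketched as follows. This is a hand-rolled, valid-mode version for illustration only, not the thesis's actual network.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (really cross-correlation, as in most
    deep-learning frameworks): slide the kernel across the signal window
    and emit one weighted sum per position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

For instance, the kernel `[1, 0, -1]` acts as a simple edge detector over an accelerometer window; a real network stacks many such learned kernels with nonlinearities and pooling.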
124

Rozpoznávání lidské aktivity s pomocí senzorů v chytrém telefonu / Human Activity Recognition Using Smartphone

Novák, Andrej January 2016 (has links)
The number of mobile smartphones continues to grow, and with it the demand for automating and exploiting the capabilities the phone offers, whether in medicine (health care and surveillance) or in user applications (automatic recognition of position, etc.). As part of this work, a system for recognizing human activity from smartphone sensor data was designed and implemented, along with the determination of optimal parameters, recognition success rates, and a comparison of individual evaluations. Other contributions include the design of a format for, and the visualization of, a large training set consisting of real recordings and their manual annotation. In addition, a software tool was created for validating the elements of the training set and extracting features from it, together with software that uses deep learning to train models and then test them.
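The step of turning a manually annotated sensor recording into a training set can be sketched as slicing the stream into fixed-length windows labeled by majority vote over the per-sample annotations. The window size and label scheme below are illustrative assumptions, not the thesis's format.

```python
def windows_with_labels(samples, labels, size):
    """Slice parallel sample/label streams into non-overlapping windows,
    assigning each window the majority label of its samples."""
    out = []
    for start in range(0, len(samples) - size + 1, size):
        win = samples[start:start + size]
        lab = labels[start:start + size]
        majority = max(set(lab), key=lab.count)
        out.append((win, majority))
    return out
```

Each `(window, label)` pair is then one training example for the classifier.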
125

E-Shape Analysis

Sroufe, Paul 12 1900 (has links)
The motivation of this work is to understand E-shape analysis and how it can be applied to various classification tasks. Its powerful feature is that it looks not only at what information is contained, but at how that information looks, which makes E-shape analysis language independent and, to some extent, size independent. In this thesis, I present a new mechanism, E-shape analysis for email, that characterizes an email without using content or context. I explore applications of the email shape by carrying out a case study on botnet detection and by examining two further applications: spam filtering and social-context-based fingerprinting. The second part of this thesis applies E-shape analysis to human activity recognition. Using the Android platform and a T-Mobile G1 phone, I collect data from the triaxial accelerometer and use it to classify the motion behavior of a subject.
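One hedged reading of the "shape" idea, characterizing text purely by the geometry of its lines rather than their content, can be sketched as a normalized line-length histogram. This is an illustrative stand-in, not the thesis's actual E-shape feature set.

```python
def email_shape(text, bins=4):
    """Summarize a text's visual 'shape' as a normalized histogram of
    line lengths, ignoring the content of the lines entirely."""
    lengths = [len(line) for line in text.splitlines()]
    if not lengths:
        return [0.0] * bins
    top = max(lengths) + 1
    hist = [0] * bins
    for n in lengths:
        hist[n * bins // top] += 1
    total = len(lengths)
    return [h / total for h in hist]
```

Because only line geometry enters the feature vector, the same computation applies to any language and never inspects message content.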
126

Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

Schneider, Bradley A. 23 May 2022 (has links)
No description available.
127

Deep Learning Approach for Extracting Heart Rate Variability from a Photoplethysmographic Signal

Odinsdottir, Gudny Björk, Larsson, Jesper January 2020 (has links)
Photoplethysmography (PPG) is a method to detect blood volume changes across heartbeats. The peaks in the PPG signal correspond to the electrical impulses sent by the heart. The duration between heartbeats varies, and these variations are better known as heart rate variability (HRV); thus, correctly finding peaks in PPG signals provides the opportunity to measure HRV accurately. Additional research indicates that deep learning approaches can extract HRV from a PPG signal with significantly greater accuracy than traditional methods. In this study, deep learning classifiers were built to detect peaks in a noise-contaminated PPG signal and to recognize the activity performed during the data recording. The dataset used in this study is provided by the PhysioBank database and consists of synchronized PPG, acceleration, and gyroscope data. The models investigated were limited to a one-layer LSTM network with six different numbers of neurons and four different window sizes. The most accurate model for peak classification consisted of 256 neurons with a window size of 15 time steps, yielding a Matthews correlation coefficient (MCC) of 0.74. The model consisting of 64 neurons with a window duration of 1.25 seconds was the most accurate for activity classification, with an MCC score of 0.63. In conclusion, further optimization of a deep learning approach could lead to promising accuracy in peak detection and thus an accurate measurement of HRV. The probable cause of the low accuracy on the activity classification problem is the limited data used in this study.
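The classical baseline that learned peak detectors are typically compared against is a simple local-maximum detector with a minimum height; a minimal sketch follows. The parameters are illustrative, not taken from the study.

```python
def find_peaks(signal, min_height=0.5):
    """Return indices of samples that exceed min_height and are local
    maxima relative to their immediate neighbors."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > min_height
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]
```

On clean PPG data such a rule already recovers beat locations; it is on noise-contaminated signals that learned classifiers justify their extra cost.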
128

Automatic Feature Extraction for Human Activity Recognition on the Edge

Cleve, Oscar, Gustafsson, Sara January 2019 (has links)
This thesis evaluates two methods for automatic feature extraction to classify accelerometer data from periodic and sporadic human activities. The first method selects features using individual hypothesis tests, and the second uses a random forest classifier as an embedded feature selector. In this study, the hypothesis test was combined with a correlation filter. Both methods used the same initial pool of automatically generated time series features, and a decision tree classifier was used to perform the human activity recognition task for both. The possibility of running the developed model on a processor with limited computing power was taken into consideration when selecting methods for evaluation. The classification results showed that the random forest method was good at prioritizing among features: with 23 features selected, it had a macro average F1 score of 0.84 and a weighted average F1 score of 0.93, whereas the first method only had a macro average F1 score of 0.40 and a weighted average F1 score of 0.63 when using the same number of features. In addition to classification performance, this thesis studies the potential business benefits that automation of feature extraction can result in.
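The embedded-selection step described above can be sketched as keeping the k highest-scoring features and projecting the dataset onto them. The score vector here stands in for the `feature_importances_` attribute of a fitted random forest; the numbers are invented for illustration.

```python
def select_top_k(X, importances, k):
    """Keep the k columns of X with the highest importance scores,
    preserving the original column order. Returns (reduced X, kept
    column indices)."""
    keep = sorted(range(len(importances)),
                  key=lambda i: importances[i], reverse=True)[:k]
    keep.sort()  # preserve original column order
    return [[row[i] for i in keep] for row in X], keep
```

The reduced matrix is then handed to the lightweight decision tree that must fit within the edge device's compute budget.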
129

Data-Driven Simulation Modeling of Construction and Infrastructure Operations Using Process Knowledge Discovery

Akhavian, Reza 01 January 2015 (has links)
Within the architecture, engineering, and construction (AEC) domain, simulation modeling is mainly used to facilitate decision-making by enabling the assessment of different operational plans and resource arrangements that are otherwise difficult (if not impossible), expensive, or time-consuming to evaluate in real-world settings. The accuracy of such models directly affects their reliability as a basis for important decisions such as project completion time estimation and resource allocation. Compared to other industries, this is particularly important in construction and infrastructure projects due to the high resource costs and the societal impacts of these projects. Discrete event simulation (DES) is a decision-making tool that can benefit the process of design, control, and management of construction operations. Despite recent advancements, most DES models used in construction are created during the early planning and design stage, when the lack of factual information from the project prohibits the use of realistic data in simulation modeling. The resulting models, therefore, are often built using rigid (subjective) assumptions and design parameters (e.g., precedence logic, activity durations). In all such cases, and in the absence of an inclusive methodology to incorporate real field data as the project evolves, modelers rely on information from previous projects (a.k.a. secondary data), expert judgments, and subjective assumptions to generate simulations to predict future performance. These and similar shortcomings have to a large extent limited the use of traditional DES tools to preliminary studies and long-term planning of construction projects. In the realm of business process management, process mining, a relatively new research domain, seeks to automatically discover a process model by observing activity records and extracting information about processes. The research presented in this Ph.D. dissertation was in part inspired by the prospect of construction process mining using sensory data collected from field agents, which enabled the extraction of the operational knowledge necessary to generate and maintain the fidelity of simulation models. A preliminary study was conducted to demonstrate the feasibility and applicability of data-driven knowledge-based simulation modeling, with a focus on data collection using a wireless sensor network (WSN) and a rule-based taxonomy of activities. The resulting knowledge-based simulation models performed very well in predicting key performance measures of real construction systems. Next, a pervasive mobile data collection and mining technique was adopted, and an activity recognition framework for construction equipment and worker tasks was developed. Data were collected from construction entities using smartphone accelerometers and gyroscopes to generate significant statistical time- and frequency-domain features. The extracted features served as the input to different types of machine learning algorithms applied to various construction activities. The trained predictive algorithms were then used to extract activity durations and calculate probability distributions to be fused into corresponding DES models. Results indicated that the generated data-driven knowledge-based simulation models outperform static models created based upon engineering assumptions and estimations with regard to the compatibility of performance measure outputs with reality.
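The fusion step, summarizing recognized activity durations into a distribution that a DES model can sample instead of a fixed engineering estimate, can be sketched as follows. The normal fit is an illustrative choice, not the dissertation's actual distribution-fitting procedure.

```python
import random
import statistics

def fit_duration_sampler(durations, seed=0):
    """Fit a normal distribution to observed activity durations and
    return a sampler a DES engine can call for each simulated activity
    instance. Durations are clamped at zero since negative durations
    are meaningless."""
    mu = statistics.mean(durations)
    sigma = statistics.stdev(durations)
    rng = random.Random(seed)
    return lambda: max(0.0, rng.gauss(mu, sigma))
```

Each time the simulation schedules the activity, it draws a fresh duration from the fitted distribution rather than reusing a single subjective estimate.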
130

Feature Pruning For Action Recognition In Complex Environment

Nagaraja, Adarsh 01 January 2011 (has links)
A significant number of action recognition research efforts use spatio-temporal interest point detectors for feature extraction. Although the extracted features provide useful information for recognizing actions, many of them contain irrelevant motion and background clutter. In many cases, the extracted features are included as-is in the classification pipeline, and sophisticated noise removal techniques are subsequently applied to alleviate their effect on classification. We introduce a new action database, created from the Weizmann database, that reveals a significant weakness in systems based on popular cuboid descriptors. Experiments show that introducing complex backgrounds, stationary or dynamic, into the video causes a significant degradation in recognition performance. Moreover, this degradation cannot be fixed by fine-tuning the system or selecting better interest points. Instead, we show that the problem lies at the descriptor level and must be addressed by modifying the descriptors.
