41

Child's play: activity recognition for monitoring children's developmental progress with augmented toys

Westeyn, Tracy Lee 20 May 2010 (has links)
The way in which infants play with objects can be indicative of their developmental progress and may serve as an early indicator of developmental delays. However, observing children interacting with toys for the purpose of quantitative analysis can be a difficult task. To better quantify how play may serve as an early indicator, researchers have conducted retrospective studies examining differences in object play behaviors among infants. Such studies, however, require researchers to repeatedly inspect videos of play, often at speeds much slower than real time, to identify points of interest. The research presented in this dissertation examines whether a combination of sensors embedded within toys and automatic pattern recognition of object play behaviors can help expedite this process. For my dissertation, I developed the Child'sPlay system, which uses augmented toys and statistical models to automatically provide quantitative measures of object play interactions, as well as the PlayView interface for viewing annotated play data during later analysis. In this dissertation, I examine the hypothesis that sensors embedded in objects can provide sufficient data for automatic recognition of certain exploratory, relational, and functional object play behaviors in semi-naturalistic environments, and that a continuum of recognition accuracy exists along which automatic indexing is useful for retrospective review. I designed several augmented toys and used them to collect object play data from more than fifty play sessions. I conducted pattern recognition experiments over these data to produce statistical models that automatically classify children's object play behaviors. In addition, I conducted a user study with twenty participants to determine whether annotations automatically generated by these models improve performance in retrospective review tasks. My results indicate that these statistical models increase user performance and decrease perceived effort when combined with the PlayView interface during retrospective review. The presence of high-quality annotations is preferred by users and increases the effective retrieval rate of object play behaviors.
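
To make the kind of pipeline this abstract describes concrete, here is a minimal Python sketch of windowed accelerometer features from an augmented toy feeding a statistical classifier. The window size, feature set, synthetic "shake"/"roll" streams, and the random forest are illustrative assumptions, not Westeyn's actual sensors or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, win=64, hop=32):
    """Slice a (n_samples, 3) accelerometer stream into overlapping
    windows and summarize each with simple statistics."""
    feats = []
    for start in range(0, len(accel) - win + 1, hop):
        w = accel[start:start + win]
        feats.append(np.hstack([w.mean(0), w.std(0),
                                np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

# synthetic stand-ins for two object-play behaviors
rng = np.random.default_rng(0)
shake = rng.normal(0, 2.0, (1000, 3))                    # jerky motion
roll = np.cumsum(rng.normal(0, 0.1, (1000, 3)), axis=0)  # smooth motion
Xs, Xr = window_features(shake), window_features(roll)
X = np.vstack([Xs, Xr])
y = np.array([0] * len(Xs) + [1] * len(Xr))

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic streams
```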
42

Energy Efficient Context-Aware Framework in Mobile Sensing

Yurur, Ozgur 01 January 2013 (has links)
The ever-increasing technological advances in embedded systems engineering, together with the proliferation of small-size sensor design and deployment, have enabled mobile devices (e.g., smartphones) to recognize everyday human actions, activities, and interactions. Inferring a wide variety of user activities from the diverse contexts captured by a series of sensory observations has therefore drawn much interest in the research area of ubiquitous sensing. Context awareness provides the capability of being conscious of the physical environments or situations around mobile device users, allowing network services to respond proactively and intelligently. Hence, with the evolution of smartphones, software developers are empowered to create context-aware applications that recognize human-centric or community-based social and cognitive activities in any situation and from anywhere. This leads to the exciting vision of an "Internet of Things" in which applications encourage users to collect, analyze, and share local sensory knowledge for large-scale community use, creating a smart network capable of making autonomous logical decisions to actuate environmental objects. More significantly, introducing intelligence and situational awareness into the recognition of human-centric event patterns could yield a better understanding of human behaviors and open a path to proactively assisting individuals in order to enhance their quality of life.

Mobile devices supporting emerging pervasive computing applications will constitute a significant part of future mobile technologies by providing highly proactive services that require continuous monitoring of user-related contexts. However, the middleware services provided on mobile devices have limited resources in terms of power, memory, and bandwidth compared to the capabilities of PCs and servers. Above all, power concerns are the major restriction standing in the way of implementing context-aware applications: continuously capturing user context through sensors imposes heavy workloads on hardware and computation, draining the battery rapidly, so device batteries do not last long when sensors operate constantly. In addition, the growing deployment of sensor technologies in mobile devices, and the innumerable software applications that utilize them, have led to layered system architectures (i.e., context-aware middleware) that must not only offer a wide range of user-specific services but also respond effectively to diversity in sensor utilization, large sensory data acquisitions, ever-increasing application requirements, pervasive context-processing software libraries, and mobile-device constraints.

Because these computing devices are ubiquitous in dynamic environments where sensor network topologies actively change, applications must behave opportunistically and adaptively, without a priori assumptions, in response to the availability of diverse resources in the physical world and to demands for scalability, modularity, extensibility, and interoperability among heterogeneous hardware. In this sense, this dissertation proposes novel solutions that improve the existing tradeoff in mobile sensing between accuracy and power consumption while context is inferred under the intrinsic constraints of mobile devices and within emerging context-aware middleware frameworks.
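
As one illustration of the accuracy/power tradeoff discussed above, the sketch below duty-cycles a sensor: it samples sparsely while the inferred context looks stable and densely when it changes. The sampling rates, power figures, and threshold-based change detector are invented for illustration and are not the framework proposed in the dissertation.

```python
import numpy as np

LOW_HZ, HIGH_HZ = 2, 32                    # assumed sampling rates
POWER_MW = {LOW_HZ: 1.5, HIGH_HZ: 12.0}    # assumed sensor power draw

def duty_cycled_inference(signal, change_thresh=1.0):
    """Walk a 64 Hz signal, adapting the effective sampling rate."""
    t, last, energy, contexts = 0, None, 0.0, []
    while t < len(signal):
        x = signal[t]
        moving = last is not None and abs(x - last) > change_thresh
        rate = HIGH_HZ if moving else LOW_HZ
        energy += POWER_MW[rate] / rate     # crude energy-per-sample proxy
        contexts.append((t, "moving" if moving else "still"))
        last = x
        t += 64 // rate                     # dense steps only when moving
    return contexts, energy

sig = np.concatenate([np.zeros(256), np.cumsum(np.ones(256) * 0.5)])
ctx, cost = duty_cycled_inference(sig)
print(len(ctx), "samples taken, energy proxy:", round(cost, 2))
```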
43

A learning-based computer vision approach for the inference of articulated motion = Ein lernbasierter Computer-Vision-Ansatz für die Erkennung artikulierter Bewegung

Curio, Cristóbal. January 1900 (has links)
Dissertation--Ruhr-Universität, Bochum, 2004. / Includes bibliographical references (p. 179-187).
44

Sistema embarcado empregado no reconhecimento de atividades humanas / Embedded system applied in human activities recognition

Ferreira, Willian de Assis Pedrobon January 2017 (has links)
Advisor: Alexandre César Rodrigues da Silva / Abstract: The use of sensors in smart environments is fundamental for supervising human activities. In human activity recognition (HAR), monitoring techniques are applied to identify the activities performed in a range of applications, such as sport and the care of people with special needs. The Sistema de Reconhecimento de Atividades Humanas (SIRAH) performs human activity recognition using an accelerometer worn at the waist of the monitored person and an artificial neural network that classifies seven activities: standing, lying, seated, walking, running, sitting down, and standing up. Originally implemented in MATLAB, it performed offline classification, so results were not available while the activities were being executed. This work presents the development of two embedded versions of SIRAH that run the classification algorithm while the monitored activities are being performed. The first implementation targeted Altera's Nios II processor, which offered the same accuracy as the offline system but with limited processing, since the software takes 673 milliseconds to perform a classification. To improve performance, another version was implemented on an FPGA using the VHDL hardware description language. That classification algorithm operates in real time, executing in only 236 microseconds and guaranteeing complete sampling of the accelerations... (Full abstract: follow the electronic access link below) / Master's
45

Modèles profonds de régression et applications à la vision par ordinateur pour l'interaction homme-robot / Deep Regression Models and Computer Vision Applications for Multiperson Human-Robot Interaction

Lathuiliere, Stéphane 22 May 2018 (has links)
In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation, or speech recognition. However, for natural interaction, the robot also needs to model high-level concepts such as speech turns, focus of attention, or interactions between participants in a conversation. In this manuscript, we follow a top-down approach. On the one hand, we present two high-level methods that model collective human behaviors. We propose a model able to recognize activities performed jointly by different groups of people, such as queueing or talking; our approach handles the general case where several group activities can occur simultaneously and in sequence. On the other hand, we introduce a novel neural-network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy in the context of human-robot interaction: the robot learns to focus its attention on groups of people from its own audio-visual experiences, without external supervision.
Second, we study deep learning approaches for regression problems in detail. Regression problems are crucial in the context of human-robot interaction in order to obtain reliable information about the head and body poses, or the age, of the persons facing the robot; consequently, these contributions are quite general and can be applied in many different contexts. First, we propose to couple a Gaussian mixture of linear inverse regressions with a convolutional neural network. Second, we introduce a Gaussian-uniform mixture model to make the training algorithm more robust to noisy annotations. Finally, we perform a large-scale study to measure the impact of several architecture choices and extract practical recommendations for using deep learning approaches in regression tasks. For each of these contributions, strong experimental validation has been performed, with real-time experiments on the NAO robot or on large and diverse datasets.
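
The Gaussian-uniform mixture mentioned above lends itself to a compact illustration. The numpy sketch below applies the idea to plain linear least squares rather than a deep network: each residual is treated as either a Gaussian inlier or a uniform outlier, and an EM-style loop downweights likely outliers. The priors, uniform range, and linear model are assumptions made for the sketch, not the thesis's implementation.

```python
import numpy as np

def robust_fit(X, y, n_iter=20, outlier_range=20.0):
    """Weighted least squares with Gaussian-uniform residual mixture."""
    w = np.ones(len(y))              # posterior inlier weights
    pi, sigma2 = 0.9, 1.0            # inlier fraction, noise variance
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # M-step: refit under current inlier weights
        W = np.diag(w)
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ theta
        # E-step: posterior probability each point is a Gaussian inlier
        g = pi * np.exp(-r**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        u = (1 - pi) / outlier_range            # uniform outlier density
        w = g / (g + u)
        pi = w.mean()
        sigma2 = max((w * r**2).sum() / w.sum(), 1e-6)
    return theta, w

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.uniform(-5, 5, 100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)
y[:10] += rng.uniform(-20, 20, 10)   # inject gross annotation noise
theta, w = robust_fit(X, y)
print(theta)                         # close to [1, 2] despite the outliers
```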
46

Sistema embarcado empregado no reconhecimento de atividades humanas / Embedded system applied in human activities recognition

Ferreira, Willian de Assis Pedrobon [UNESP] 24 August 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The use of sensors in smart environments is fundamental to monitoring human activities. In human activity recognition (HAR), supervision techniques are employed to identify activities in several areas, such as sports practice and the monitoring of people with special needs. The Sistema de Reconhecimento de Atividades Humanas (SIRAH) recognizes human activities using an accelerometer located at the monitored person's waist and an artificial neural network that classifies seven activities: standing, lying, seated, walking, running, sitting down, and standing up. Originally it performed offline classifications executed in MATLAB software. In this work we present the development of two embedded SIRAH versions, which run the classification algorithm during the practice of the monitored activities. The first implementation ran on Altera's Nios II processor, which provided the same accuracy as the offline system but with limited processing, taking 673 milliseconds per classification. To improve performance, the other version was implemented on an FPGA using the VHDL hardware description language; it classifies in real time, in only 236 microseconds, ensuring lossless sampling of the accelerations.
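
A hedged sketch of a SIRAH-style classifier follows: simple statistics from a waist-worn accelerometer window feed a small artificial neural network that labels one of the seven activities. The feature set, network size, and synthetic data are assumptions; the thesis's actual topology, and its Nios II and VHDL ports, are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

ACTIVITIES = ["standing", "lying", "seated", "walking",
              "running", "sitting down", "standing up"]

def features(window):
    """window: (n, 3) accelerometer samples -> per-axis statistics."""
    return np.hstack([window.mean(0), window.std(0),
                      window.min(0), window.max(0)])

# synthetic stand-in data: one Gaussian cluster per activity
rng = np.random.default_rng(2)
X, y = [], []
for label in range(7):
    for _ in range(40):
        window = rng.normal(label, 0.5, (50, 3))  # fake accelerometer window
        X.append(features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(ACTIVITIES[clf.predict(X[:1])[0]])
```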
47

Geometry Aware Compressive Analysis of Human Activities: Application in a Smart Phone Platform

January 2014 (has links)
abstract: Continuous monitoring of sensor data from smart phones to identify human activities and gestures puts a heavy load on the smart phone's power consumption. In this research study, the non-Euclidean geometry of the rich sensor data obtained from the user's smart phone is utilized to perform compressive analysis and efficient classification of human activities using machine learning techniques. We are interested in generalizing classical tools for signal approximation to newer spaces, such as rotation data, which is best studied in a non-Euclidean setting, and in applying them to activity analysis. Owing to the non-linear nature of the rotation data space, feature extraction imposes a heavy load on the smart phone's processor and memory compared to feature extraction in Euclidean space; therefore, indexing and compaction of the acquired sensor data are performed prior to feature extraction, reducing CPU overhead and thereby increasing battery lifetime with only a small loss in recognition accuracy. The sensor data, represented as unit quaternions, is a more intrinsic representation of the smart phone's orientation than Euler angles (which suffer from the gimbal lock problem) or computationally intensive rotation matrices. Classification algorithms are employed to classify these manifold sequences in the non-Euclidean space. By performing customized indexing (using the K-means algorithm) on the evolved manifold sequences before feature extraction, considerable savings are achieved in the smart phone's battery life. / Dissertation/Thesis / M.S. Electrical Engineering 2014
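
The indexing step described above can be illustrated briefly: orientation samples as unit quaternions, compacted with K-means into a small codebook whose code-word histogram replaces the raw stream before feature extraction. Real quaternion clustering calls for a non-Euclidean metric; plain Euclidean K-means after sign alignment is a simplifying assumption here, as is the synthetic data.

```python
import numpy as np
from sklearn.cluster import KMeans

def unit_quaternions(n, rng):
    """Random unit quaternions, sign-aligned so q and -q coincide."""
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    q[q[:, 0] < 0] *= -1   # q and -q encode the same rotation
    return q

rng = np.random.default_rng(3)
stream = unit_quaternions(500, rng)      # one activity's rotation stream

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(stream)
codes = codebook.predict(stream)         # each sample -> nearest centroid
hist = np.bincount(codes, minlength=8) / len(codes)
print(hist)   # compact 8-bin descriptor instead of 500 raw quaternions
```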
48

Representação simbólica de séries temporais para reconhecimento de atividades humanas no smartphone / Symbolic representation of time series for human activity recognition using smartphone

Quispe, Kevin Gustavo Montero, https://orcid.org/0000-0002-0550-4748 14 August 2018 (has links)
Human activity recognition (HAR) through sensors embedded in wearable devices such as smartphones has enabled solutions capable of monitoring human behavior. However, such solutions have shown limitations in their consumption of computational resources and in their generalization to different application settings or data domains. This work explores those limitations in the feature extraction process, where existing solutions extract features from the sensor data manually. To overcome the problem, this work presents an automatic feature extraction approach based on the symbolic representation of time series, a representation defined over sets of discrete symbols (words). In this context, the work extends the Bag-Of-SFA-Symbols (BOSS) symbolic representation method to handle multiple time series, reduce data dimensionality, and generate compact and efficient classification models. The proposed method, called Multivariate Bag-Of-SFA-Symbols (MBOSS), is evaluated on the classification of physical activities from inertial sensor data. Experiments are conducted on three public datasets under different experimental configurations, and the method's efficiency is also evaluated in terms of computation time and data space. The results generally show classification effectiveness equivalent to solutions based on the traditional manual feature extraction approach, with the strongest results on the dataset with nine activity classes (UniMiB SHAR), where MBOSS obtained accuracies of 99% and 87% for the personalized and generalized models, respectively. The efficiency results demonstrate the solution's low computational cost and show that it is feasible to run on smartphones.
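
A much-simplified sketch of the BOSS-style transform that MBOSS extends: each sliding window is reduced to a few Fourier magnitudes, the magnitudes are quantized into letters, and the series becomes a bag of symbolic words. Quantile bin edges stand in for SFA's learned Multiple Coefficient Binning, so treat this as an approximation of the idea, not a reimplementation.

```python
import numpy as np
from collections import Counter

def boss_words(series, win=32, n_coef=3, n_bins=4):
    """Turn one time series into a bag of discrete words."""
    windows = np.lib.stride_tricks.sliding_window_view(series, win)[::4]
    coefs = np.abs(np.fft.rfft(windows, axis=1))[:, 1:n_coef + 1]
    letters = np.array(list("abcdefgh")[:n_bins])
    cols = []
    for j in range(n_coef):          # quantize each coefficient separately
        edges = np.quantile(coefs[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        cols.append(letters[np.digitize(coefs[:, j], edges)])
    # one word per window: its letters across the kept coefficients
    return Counter("".join(w) for w in zip(*cols))

rng = np.random.default_rng(4)
walking = np.sin(np.linspace(0, 40 * np.pi, 600)) + rng.normal(0, 0.2, 600)
print(boss_words(walking).most_common(3))     # dominant symbolic words
```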
49

Design, Optimization, and Applications of Wearable IoT Devices

January 2020 (has links)
abstract: Movement disorders are becoming one of the leading causes of functional disability due to aging populations and extended life expectancy. Diagnosis, treatment, and rehabilitation currently depend on the behavior observed in a clinical environment. After the patient leaves the clinic, there is no standard approach to continuously monitor the patient and report potential problems. Furthermore, self-recording is inconvenient and unreliable. To address these challenges, wearable health monitoring is emerging as an effective way to augment clinical care for movement disorders. Wearable devices are being used in many health, fitness, and activity monitoring applications. However, their widespread adoption has been hindered by several adaptation and technical challenges. First, conventional rigid devices are uncomfortable to wear for long periods. Second, wearable devices must operate under very low-energy budgets due to their small battery capacities. Small batteries create a need for frequent recharging, which in turn leads users to stop using them. Third, the usefulness of wearable devices must be demonstrated through high-impact applications such that users can get value out of them. This dissertation presents solutions to the challenges faced by wearable devices. First, it presents an open-source hardware/software platform for wearable health monitoring. The proposed platform uses flexible hybrid electronics to enable devices that conform to the shape of the user's body. Second, it proposes an algorithm to enable recharge-free operation of wearable devices that harvest energy from the environment. The proposed solution maximizes the performance of the wearable device under minimum energy constraints; its results are, on average, within 3% of the optimal solution computed offline. Third, a comprehensive framework for human activity recognition (HAR), one of the first steps toward a solution for movement disorders, is presented. It starts with an online learning framework for HAR. Experiments on a low-power IoT device (TI-CC2650 MCU) with twenty-two users show 95% accuracy in identifying seven activities and their transitions with less than 12.5 mW power consumption. The online learning framework is accompanied by a transfer learning approach for HAR that determines the number of neural network layers to transfer among users to enable efficient online learning. Next, a technique to co-optimize the accuracy and active time of wearable applications by utilizing multiple design points with different energy-accuracy trade-offs is presented. The proposed technique switches between the design points at runtime to maximize a generalized objective function under tight harvested-energy budget constraints. Finally, the dissertation presents the first ultra-low-energy hardware accelerator that makes it practical to perform HAR on energy harvested by wearable devices. The accelerator consumes 22.4 microjoules per operation using a commercial 65 nm technology. In summary, the solutions presented in this dissertation can enable the wider adoption of wearable devices. / Dissertation/Thesis / Human activity recognition dataset / Doctoral Dissertation Computer Engineering 2020
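
The runtime design-point switching summarized above can be sketched in a few lines: hold several (power, accuracy) operating points and, at each time slot, pick the most accurate point that the remaining harvested-energy budget affords. The specific points and the greedy budget-spreading policy are illustrative assumptions, not the dissertation's optimization.

```python
DESIGN_POINTS = [   # (power_mW, accuracy) -- assumed operating points
    (2.0, 0.80),
    (6.0, 0.90),
    (12.5, 0.95),
]

def schedule(harvested_mj, n_slots, slot_s=1.0):
    """Greedy per-slot choice under a total harvested-energy budget."""
    budget, plan = harvested_mj, []
    for slot in range(n_slots):
        per_slot = budget / (n_slots - slot)   # spread remaining energy
        feasible = [p for p in DESIGN_POINTS if p[0] * slot_s <= per_slot]
        # most accurate affordable point; fall back to the cheapest one
        power, acc = max(feasible or [DESIGN_POINTS[0]], key=lambda p: p[1])
        budget -= power * slot_s
        plan.append((slot, power, acc))
    return plan

for slot, power, acc in schedule(harvested_mj=60.0, n_slots=10):
    print(f"slot {slot}: {power} mW, expected accuracy {acc:.2f}")
```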
50

A COMPARATIVE STUDY OF DEEP-LEARNING APPROACHES FOR ACTIVITY RECOGNITION USING SENSOR DATA IN SMART OFFICE ENVIRONMENTS

Johansson, Alexander, Sandberg, Oscar January 2018 (has links)
The purpose of this study is to compare three deep learning networks with each other to evaluate which can produce the highest prediction accuracy, measured as the networks try to predict the number of people in the room under observation. In addition to comparing the three deep learning networks with each other, we also compare them with a traditional machine learning approach, in order to find out whether deep learning methods outperform traditional methods. This study uses design and creation, a research methodology that places great emphasis on developing an IT product and uses that product as its contribution to new knowledge. The methodology has five phases; we chose an iterative process between the development and evaluation phases. Observation is the data generation method used to collect data. Data generation lasted three weeks, resulting in 31,287 rows of data recorded in our database. One of our deep learning networks produced an accuracy of 78.2%, while the other two produced accuracies of 45.6% and 40.3%, respectively. For our traditional method we used decision trees with two different formulas, which produced accuracies of 61.3% and 57.2%, respectively. The results of this thesis show that, of the three deep learning networks included in this study, only one is able to produce higher predictive accuracy than the traditional ML approaches.
This does not necessarily mean that deep learning approaches in general produce higher predictive accuracy than traditional machine learning approaches. Further work could include additional experimentation with the dataset and hyperparameters, gathering and properly validating more data, and comparing more and other deep learning and machine learning approaches.
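
The comparison methodology above reduces to a familiar pattern, sketched below with scikit-learn: train a traditional decision tree and a neural network on the same sensor features and compare held-out accuracy. The synthetic occupancy data and the two particular models are assumptions; the thesis's three deep networks are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
# fake sensor features (CO2, temperature, motion count) vs. occupants (0-3)
y = rng.integers(0, 4, 2000)
X = np.column_stack([400 + 120 * y + rng.normal(0, 40, 2000),
                     21 + 0.4 * y + rng.normal(0, 0.5, 2000),
                     y * 3 + rng.poisson(1, 2000)])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (DecisionTreeClassifier(max_depth=6, random_state=0),
              MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1500,
                            random_state=0)):
    acc = model.fit(Xtr, ytr).score(Xte, yte)  # held-out accuracy
    print(type(model).__name__, f"accuracy: {acc:.3f}")
```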
