During the COVID-19 pandemic, the question of how efficient meetings are has come to the forefront of discussions inside companies. One way to measure efficiency is to measure the interactivity between participants, which in turn requires that the participants be identified. With recent advances in machine learning, can this be done using facial and voice recognition? Cloud computing is another field that has risen rapidly. Can machine learning and cloud computing together be used to evaluate and monitor a meeting, handling both audio and video streams in real time? The conclusion of this thesis is that artificial intelligence (AI) can be used to monitor a meeting, and that Amazon Web Services (AWS) can be utilized to do so. The choice of the AWS DeepLens, however, was not the best one: hardware like the DeepLens is required, but with tighter integration with cloud computing and more freedom to run several models for handling both the audio and video feeds. By using existing models to automatically annotate data, the time needed to train a new model can be reduced. With the help of transfer learning through AWS, the data generated during a single meeting is enough to build a model for facial detection and identification.
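
The abstract does not specify which AWS service performs the face matching, so as a minimal illustration of cloud-based participant identification the Python sketch below assumes Amazon Rekognition face collections accessed through boto3; the region, collection name, and image paths are hypothetical placeholders, not values from the thesis.

# Minimal sketch of participant identification with Amazon Rekognition
# face collections (hypothetical names and paths; see note above).
from typing import Optional

import boto3

REGION = "eu-west-1"                     # assumed region
COLLECTION_ID = "meeting-participants"   # hypothetical collection name

rekognition = boto3.client("rekognition", region_name=REGION)


def enroll_participant(image_path: str, participant_id: str) -> None:
    """Index one reference face per participant into the collection."""
    with open(image_path, "rb") as image:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image.read()},
            ExternalImageId=participant_id,  # label returned on later matches
            MaxFaces=1,
        )


def identify_participant(frame_path: str) -> Optional[str]:
    """Match the largest face in a captured frame against enrolled participants."""
    with open(frame_path, "rb") as frame:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": frame.read()},
            FaceMatchThreshold=90,  # discard weak matches
            MaxFaces=1,
        )
    matches = response.get("FaceMatches", [])
    return matches[0]["Face"]["ExternalImageId"] if matches else None


if __name__ == "__main__":
    # Create the collection once, enroll a participant, then identify a frame.
    rekognition.create_collection(CollectionId=COLLECTION_ID)
    enroll_participant("alice.jpg", "alice")          # hypothetical reference photo
    print(identify_participant("meeting_frame.jpg"))  # hypothetical video frame

In a meeting-monitoring pipeline of the kind the thesis describes, a call like identify_participant would run on frames captured from the video feed, and the returned participant labels could then be combined with the audio stream to estimate interactivity between participants.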
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-81105 |
Date | January 2020 |
Creators | Hansson, Andreas |
Publisher | Luleå tekniska universitet, Institutionen för system- och rymdteknik |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |