1

Wi-Fi Sensing: Device-Free In-Zone Object Movement Detection

Schnorr, Nicholas P 01 December 2021 (has links) (PDF)
Wi-Fi Sensing is becoming a prominent field with a wide range of potential applications. Using existing hardware on a wireless network such as access points, cell phones, and smart home devices, important information can be inferred about the current physical environment. Through the analysis of Channel State Information collected in the Neighborhood Discovery Protocol process, the wireless network can detect disturbances in Wi-Fi signals when the physical environment changes. This results in a system that can sense motion within the Wi-Fi network, allowing for movement detection without any wearable devices. The goal of this thesis is to answer whether Wi-Fi Sensing can enable useful applications at the enterprise level. The main applications we will focus on are presence detection and in-zone movement detection. Our contributions include: 1. A scalable, statistical analysis system that generates a heatmap and detects movement in a 12 x 9 meter zone with 98 percent accuracy, as well as a 6 x 9 meter zone with 88 percent accuracy. 2. A broad dataset collected for evaluation in an enterprise setting. 3. An end-to-end CSI data visualization and analysis application.
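As a rough illustration of the kind of statistical test such a system relies on (not the thesis's actual pipeline), the sketch below flags motion when the windowed variance of CSI amplitudes exceeds a threshold; the array layout, window size, and threshold are assumptions for illustration only.

```python
import numpy as np

def detect_motion(csi_amplitude: np.ndarray, window: int = 100, threshold: float = 0.5) -> np.ndarray:
    """Flag motion when the windowed variance of CSI amplitudes exceeds a threshold.

    csi_amplitude: assumed array of shape (n_samples, n_subcarriers) holding
    per-subcarrier amplitudes extracted from CSI reports.
    Returns one boolean decision per non-overlapping window.
    """
    n_windows = csi_amplitude.shape[0] // window
    decisions = []
    for w in range(n_windows):
        chunk = csi_amplitude[w * window:(w + 1) * window]
        # Variance over time, averaged across subcarriers; movement perturbs
        # the multipath profile and inflates this statistic.
        score = chunk.var(axis=0).mean()
        decisions.append(score > threshold)
    return np.array(decisions)
```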
2

Corridor Navigation for Monocular Vision Mobile Robots

Ng, Matthew James 01 June 2018 (has links)
Monocular vision robots use a single camera to process information about their environment. By analyzing this scene, the robot can determine the best navigation direction. Many modern approaches to robot hallway navigation rely on a plethora of sensors to detect features in the environment: laser range finders, inertial measurement units, motor encoders, and cameras. Even when all of these sensors are combined, some data goes unused that could be useful for navigation. To step back and establish a baseline, this thesis explores the reliability and capability of using a camera alone for navigation. The basic navigation pipeline takes frames from the camera and breaks them down to find the most prominent lines. The location where these lines intersect determines the forward direction in which to drive the robot. To improve navigation accuracy, algorithmic refinements and additional features extracted from the camera frames are used, including line-intersection weighting to reduce noise from extraneous lines, floor segmentation to improve rotational stability, and person detection.
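A minimal sketch of the line-intersection idea described above, using OpenCV's edge detection and probabilistic Hough transform; the thresholds and the simple median aggregation are placeholders rather than the thesis's tuned weighting scheme.

```python
import cv2
import numpy as np

def forward_direction(frame: np.ndarray):
    """Estimate a steering x-coordinate from the intersections of prominent lines."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    segs = [l[0] for l in lines]
    xs = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            x1, y1, x2, y2 = segs[i]
            x3, y3, x4, y4 = segs[j]
            denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(denom) < 1e-6:
                continue  # skip near-parallel line pairs
            # x-coordinate of the intersection of the two infinite lines
            px = ((x1 * y2 - y1 * x2) * (x3 - x4) -
                  (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
            xs.append(px)
    # The median intersection x-coordinate acts as a crude vanishing point.
    return float(np.median(xs)) if xs else None
```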
3

Aktuelle Methoden der Background Subtraction und deren Anwendung als Vorverarbeitung einer Gestürzten-Personen-Erkennung / Current Methods of Background Subtraction and Their Application as Preprocessing for Fallen-Person Detection

Brose, Jan 03 June 2022 (has links)
The topic of this thesis is the development of a background subtraction method and its use in fallen-person detection, in the context of a robot night watchman in a care facility. To this end, the current state of the art in background subtraction is reviewed. Based on this survey and on the constraints imposed by the deployment scenario, an approach is then selected and implemented.
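The thesis selects its own approach based on its survey; purely as a point of reference, a minimal background-subtraction baseline with OpenCV's Gaussian-mixture model might look as follows (MOG2 is a standard method and the file name and parameters are illustrative, not the thesis's choices).

```python
import cv2

# Standard Gaussian-mixture background subtractor; history and threshold are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("night_watch.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)     # 255 = foreground, 127 = shadow
    fg_mask = cv2.medianBlur(fg_mask, 5)  # suppress speckle noise
    # Foreground blobs could then be handed to a fallen-person classifier.
cap.release()
```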
4

PERSON RE-IDENTIFICATION USING RGB-DEPTH CAMERAS

Oliver Moll, Javier 29 December 2015 (has links)
The presence of surveillance systems in our lives has drastically increased during the last years. Camera networks can be seen in almost every crowded public and private place, and they generate huge amounts of data with valuable information. The automatic analysis of these data plays an important role in extracting relevant information from the scene. In particular, person re-identification is a prominent topic that has attracted great interest, especially in the fields of security and marketing. However, factors such as changes in illumination conditions, variations in person pose, occlusions, and the presence of outliers make this topic really challenging. Fortunately, the recent introduction of new technologies such as depth cameras opens new paradigms in the image processing field and brings new possibilities. This thesis proposes a new complete framework to tackle the problem of person re-identification using commercial RGB-depth cameras. This work includes the analysis and evaluation of new approaches for the segmentation, tracking, description, and matching modules. To evaluate our contributions, a public dataset for person re-identification using RGB-depth cameras has been created. RGB-depth cameras provide accurate 3D point clouds with color information. Based on the analysis of the depth information, a novel algorithm for person segmentation is proposed and evaluated. This method accurately segments any person in the scene and naturally copes with occlusions and connected people. The segmentation mask of a person generates a 3D person cloud, which can easily be tracked over time based on proximity. Accumulating all the person point clouds over time generates a set of high-dimensional color features, named raw features, that provide useful information about the person's appearance. In this thesis, we propose a family of methods to extract relevant information from the raw features in different ways. The first approach compacts the raw features into a single color vector, named Bodyprint, that provides a good generalisation of the person's appearance over time. Second, we introduce the concept of 3D Bodyprint, an extension of the Bodyprint descriptor that includes the angular distribution of the color features. Third, we characterise the person's appearance as a bag of color features that are independently generated over time. This descriptor receives the name Bag of Appearances because of its similarity to the concept of Bag of Words. Finally, we use different probabilistic latent variable models to reduce the feature vectors from a statistical perspective. The evaluation of the methods demonstrates that our proposals outperform the state of the art. / Oliver Moll, J. (2015). PERSON RE-IDENTIFICATION USING RGB-DEPTH CAMERAS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59227
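The exact Bodyprint construction is defined in the thesis; purely as a loose illustration of the general idea of compacting accumulated per-frame color features into a single descriptor and matching by distance, a sketch under those assumptions might be:

```python
import numpy as np

def compact_descriptor(frame_histograms: np.ndarray) -> np.ndarray:
    """Average L1-normalized per-frame color histograms into a single vector.

    frame_histograms: assumed (n_frames, n_bins) histograms accumulated while
    tracking one person's point cloud. This is a stand-in for the thesis's
    Bodyprint descriptor, not its actual definition.
    """
    norms = frame_histograms.sum(axis=1, keepdims=True) + 1e-9
    return (frame_histograms / norms).mean(axis=0)

def match(query: np.ndarray, gallery: np.ndarray) -> int:
    """Return the index of the gallery descriptor with highest cosine similarity."""
    q = query / (np.linalg.norm(query) + 1e-9)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-9)
    return int(np.argmax(g @ q))
```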
5

Analyse des personnes dans les films stéréoscopiques / Person analysis in stereoscopic movies

Seguin, Guillaume 29 April 2016 (has links)
People are at the center of many computer vision tasks, such as surveillance systems or self-driving cars. They are also at the center of most visual content, potentially providing very large datasets for training models and algorithms. While stereoscopic data has long been studied, it is only recently that feature-length stereoscopic ("3D") movies became widely available. In this thesis, we study how the additional information provided by 3D movies can be exploited for person analysis. We first explore how to extract a notion of depth from stereo movies in the form of disparity maps. We then evaluate how person detection and human pose estimation methods perform on such data. Leveraging the relative ease of the person detection task in 3D movies, we develop a method to automatically harvest examples of persons in 3D movies and train a person detector for standard color movies. We then focus on the task of segmenting multiple people in videos. We first propose a method to segment multiple people in 3D videos by combining cues derived from pose estimates with ones derived from disparity maps. We formulate the segmentation problem as a multi-label Conditional Random Field problem, and our method integrates an occlusion model to produce a layered, multi-instance segmentation. After showing the effectiveness of this approach as well as its limitations, we propose a second model that relies only on tracks of person detections and not on pose estimates. We formulate the problem as a convex optimization: the minimization of a quadratic cost under linear equality or inequality constraints. These constraints weakly encode the localization information provided by person detections. This method does not explicitly require pose estimates or disparity maps but can integrate these additional cues, and it can also be used to segment instances of other object classes from videos. We evaluate all these aspects and demonstrate the superior performance of this new method.
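As a small illustration of the first step described above, extracting a disparity map from a stereo frame pair, OpenCV's semi-global block matcher can be used; the file names and parameters below are generic placeholders, not the settings used in the thesis.

```python
import cv2

left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point to pixels
```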
6

Building a low-cost IoT sensor system that recognizes behavioral patterns for collaborative learning - A Proof of Concept

Sundblad, Graziella January 2021 (has links)
Since the advent of the Internet, we have been observing a fast-paced development within the computing world. One of the major innovations in recent years is the "Internet of Things", which brings interconnectedness between devices and humans to unprecedented heights. This technological breakthrough enabled the emergence of a new sub-field within Learning Analytics, Multimodal Learning Analytics, which makes use of several types of data sources to study learning-related processes. As computers and sensors become increasingly cheaper and more accessible, research within this new sub-field grows, yet some gaps remain unexplored. Additionally, there is a research bias toward computer-assisted learning environments rather than physical ones. At the same time, the current labor market is highly competitive, and possessing profession-related skills is not sufficient to land a job. Besides these skills, there is an increasing demand for social skills, such as communication, teamwork, and collaboration. However, there is a gap between the skills that are trained in an academic setting and the ones that are required by the labor market. With this background in mind, this work aims at designing and evaluating an IoT sensor system capable of tracking patterns observed during social interactions within a group, more specifically in terms of the distance between group members while solving a task. Another important aspect of this study is the system's cost-effectiveness, so that it can be employed in a scalable and sustainable manner. To achieve this goal, a multimethodological approach for Design Science Research was adopted, combining several methods such as sketching, prototyping, and testing. As a result, this study contributes both to the research area of Multimodal Learning Analytics and to educational practices.
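The concrete sensors and firmware are part of the thesis's design work; as one common way such a low-cost system could estimate the distance between group members, the sketch below applies a log-distance path-loss model to Bluetooth RSSI readings. The reference power and path-loss exponent are assumed values that would require calibration, and this is not presented as the system actually built in the thesis.

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, path_loss_n: float = 2.0) -> float:
    """Estimate distance in meters from an RSSI reading.

    tx_power_dbm: assumed received power at 1 m; path_loss_n: assumed
    path-loss exponent (about 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

# Example: an RSSI of -70 dBm maps to roughly 3.5 m under these assumptions.
print(round(rssi_to_distance(-70.0), 1))
```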
7

An inquiry into the efficacy of convolutional neural networks in low-resolution video feeds for object detection / En undersökning gällande effektiviteten av convolutional neurala nätverk i låg-kvalitets video-strömmar för objekt detektion

Okanovic, Mirza January 2019 (has links)
In this thesis, various well-known models have been investigated and compared with a custom model for people detection in low-resolution video feeds. YOLOv3 and SSD in particular are famous models which, at their time, produced state-of-the-art results in competitions such as ImageNet and COCO. The performance of all models has been compared on speed and accuracy, where it was found that YOLOv3 was the slowest and SSD the fastest. The proposed model was superior in accuracy to both of the aforementioned architectures, which can be attributed to the addition of newer techniques from research, such as leaving some activations out and using a carefully balanced loss function. The results suggest that the proposed model can be implemented for real-time inference on cheap hardware, such as a Raspberry Pi 3B+ coupled with one or more AI accelerator sticks such as the Intel Neural Compute Stick 2, and that the networks are usable for detection even in poor video streams.
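For context on how one of the reference detectors (YOLOv3) can be run on low-resolution frames, a minimal OpenCV DNN sketch follows; the config/weights paths and thresholds are placeholders, and this is the off-the-shelf baseline rather than the thesis's proposed model.

```python
import cv2
import numpy as np

# Pretrained Darknet files are assumed to be available locally (placeholder paths).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_people(frame: np.ndarray, conf_threshold: float = 0.4):
    """Return bounding boxes (x, y, w, h) for class 0 ('person' in COCO ordering)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    boxes = []
    for out in outputs:
        for det in out:
            scores = det[5:]                    # per-class scores follow box + objectness
            if scores[0] > conf_threshold:      # class 0 = person
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```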
8

3D detection and pose estimation of medical staff in operating rooms using RGB-D images / Détection et estimation 3D de la pose des personnes dans la salle opératoire à partir d'images RGB-D

Kadkhodamohammadi, Abdolrahim 01 December 2016 (has links)
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with the numerous challenges present in the OR, e.g. clutter, textureless surfaces and occlusions. We present novel part-based approaches that take advantage of depth, multi-view and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
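As a small illustration of how RGB-D input of this kind is commonly turned into 3D points for reasoning in metric space, the pinhole back-projection below maps a depth image to a point cloud; the intrinsics are placeholder values, not those of the cameras used in the thesis, and this is a generic preprocessing step rather than the thesis's detection model.

```python
import numpy as np

def depth_to_pointcloud(depth_m: np.ndarray, fx: float = 525.0, fy: float = 525.0,
                        cx: float = 319.5, cy: float = 239.5) -> np.ndarray:
    """Back-project a depth image (meters) into an (N, 3) point cloud.

    fx, fy, cx, cy are placeholder pinhole intrinsics; real values come from
    the RGB-D camera's calibration.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```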
9

Sledování více osob ve videu z jedné kamery / Multi-Person Tracking in Video from Mono-Camera

Vojvoda, Jakub January 2016 (has links)
Multiple-person detection and tracking is a challenging problem with high application potential. The difficulty is caused mainly by the complexity of the scene and by large variations in the articulation and appearance of people. The aim of this work is to design and implement a system capable of detecting and tracking people in video from a static mono-camera. For this purpose, an online tracking method has been proposed based on the tracking-by-detection approach. The method combines detection, tracking, and fusion of responses to achieve accurate results. The implementation was evaluated on an available dataset, and the results show that it is suitable for this task. A motion segmentation method was proposed and implemented to improve the tracking results. Furthermore, the implementation of a detector based on histograms of oriented gradients was accelerated by taking advantage of the graphics processing unit (GPU).
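The abstract mentions a histogram-of-oriented-gradients detector inside a tracking-by-detection loop; a minimal sketch of that detection side using OpenCV's default people detector (the CPU version, not the GPU-accelerated implementation from the thesis, with a hypothetical input file) is:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("static_camera.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Per-frame detections; a tracker would then associate these boxes across
    # frames (e.g. by overlap or appearance), as in tracking-by-detection.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```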
