1

The Impact of the Use of Wearable Video Systems in Law Enforcement

Hoard, DeAris Vontae 01 January 2019 (has links)
Wearable video systems (WVSs) are one of the most popular and fastest growing technologies used by law enforcement today. While published WVS literature predominantly focuses on stakeholder perceptions, community interactions, assaults against officers, and use of force, there has been little exploration of the impact of WVSs as it relates to aspects of police misconduct, especially in the Cruiser Police Department (pseudonym; CPD). The purpose of this mixed methods study was to explore and describe how the use of WVSs by the CPD impacts police misconduct, by tracking changes in complaint type and disposition over a 5-year period, and to examine how CPD officers perceive the impact of WVS use. Deterrence theory and phenomenology provided structure for this research study. The quantitative portion of this study consisted of an interrupted time series analysis of 419 documented complaints against CPD officers between June 2013 and June 2018. The qualitative portion consisted of 67 anonymous, online surveys completed by current CPD officers with WVS experience, which were thematically analyzed. Quantitative findings included a 13% overall increase in the number of complaints, a 15% drop in citizen complaints, a 28% increase in chief-initiated complaints, and a 41% increase in sustained complaints. Qualitative findings provided insight into CPD officers' acceptance and valuation of WVSs, along with their strong concern that WVS implementation leads to more discipline of officers. Implications for positive social change include an awareness of unintended consequences of current policies and practices and empirical awareness of trends associated with WVSs, specifically regarding discipline, officer acceptance, and police-community interaction.
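For context on the quantitative method named above: an interrupted time series analysis is commonly implemented as a segmented regression with a baseline trend, a level change at the intervention, and a post-intervention slope change. The sketch below illustrates that general design in Python with simulated monthly complaint counts and an assumed rollout month; none of the numbers, variable names, or model details come from the study itself.

    import numpy as np
    import statsmodels.api as sm

    # Simulated stand-in data: 60 monthly complaint counts with a WVS rollout
    # assumed at month 30 (purely hypothetical values, not the study's data).
    rng = np.random.default_rng(1)
    months = np.arange(60)
    rollout = 30
    counts = 7 + 0.02 * months + 1.5 * (months >= rollout) + rng.poisson(2, 60)

    # Segmented-regression design matrix: baseline trend, level shift at the
    # rollout, and change in slope after the rollout.
    after = (months >= rollout).astype(float)
    time_since = np.where(months >= rollout, months - rollout, 0)
    X = sm.add_constant(np.column_stack([months, after, time_since]))

    fit = sm.OLS(counts, X).fit()
    print(fit.params)  # intercept, pre-trend, level change, slope change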
2

Development and evaluation of a terrestrial animal-borne video system for ecological research

Moll, Remington James, January 2008 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2008. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on September 12, 2008). Vita. Includes bibliographical references.
3

Very Low Bitrate Video Communication : A Principal Component Analysis Approach

Söderström, Ulrik January 2008 (has links)
A large amount of the information in conversations comes from non-verbal cues such as facial expressions and body gestures. These cues are lost when we don't communicate face-to-face. But face-to-face communication doesn't have to happen in person. With video communication we can at least deliver information about the facial mimic and some gestures. This thesis is about video communication over distances; communication that can be made available over networks with low capacity, since the bitrate needed for this kind of video communication is low. A visual image needs to have high quality and resolution to be semantically meaningful for communication, and delivering such video over networks requires that the video be compressed. The standard way to compress video images, used by H.264 and MPEG-4, is to divide the image into blocks and represent each block with mathematical waveforms, usually frequency features. These waveforms are quite good at representing any kind of video since they do not resemble anything in particular; they are just frequency features. But because they are completely arbitrary, they cannot compress video enough to enable use over networks with limited capacity, such as GSM and GPRS. Another issue is that such codecs have high complexity because redundancy is removed through positional shifts of the blocks. High complexity and bitrate mean that a device has to consume a large amount of energy for encoding, decoding and transmission of such video, and energy is a very important factor for battery-driven devices. These drawbacks of standard video coding mean that it isn't possible to deliver video anywhere and anytime when it is compressed with such codecs. To resolve these issues we have developed a totally new type of video coding. Instead of using mathematical waveforms for representation, we use faces to represent faces. This makes the compression much more efficient than if waveforms are used, even though the faces are person-dependent. We build a model of the changes in the face, the facial mimic, and use this model to encode the images. The model consists of representative facial images, and we extract it with a powerful mathematical tool: principal component analysis (PCA). This coding has very low complexity since encoding and decoding consist only of multiplication operations. The faces are treated as single encoding entities and all operations are performed on full images; no block processing is needed. These features mean that PCA coding can deliver high-quality video at very low bitrates with low complexity for encoding and decoding. With the use of asymmetrical PCA (aPCA) it is possible to use only semantically important areas for encoding while decoding full frames or a different part of the frames. We show that a codec based on PCA can compress facial video to a bitrate below 5 kbps and still provide high quality. This bitrate can be delivered on a GSM network. We also show the possibility of extending PCA coding to the encoding of high-definition video.
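The PCA coding described in this abstract reduces each facial frame to a handful of projection coefficients, so encoding and decoding are indeed just multiplications. The snippet below is a minimal illustration of that general idea in Python/NumPy; the frame size, the ten components, and the random training frames are assumptions made for the example, not the thesis's actual implementation (which also covers aspects such as aPCA and coefficient quantization that are not shown).

    import numpy as np

    def build_pca_model(training_frames, num_components=10):
        """Build a person-specific facial-mimic model from flattened grayscale frames."""
        mean_face = training_frames.mean(axis=0)
        centered = training_frames - mean_face
        # SVD of the centered frames gives the principal components.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean_face, vt[:num_components]

    def encode(frame, mean_face, components):
        """Encoding is a projection: one dot product per component."""
        return components @ (frame - mean_face)

    def decode(coefficients, mean_face, components):
        """Decoding reconstructs the frame as mean plus a weighted sum of components."""
        return mean_face + coefficients @ components

    # Hypothetical usage: random 16x16 'frames' stand in for registered face images.
    rng = np.random.default_rng(0)
    frames = rng.random((100, 16 * 16))
    mean_face, comps = build_pca_model(frames, num_components=10)
    coeffs = encode(frames[0], mean_face, comps)   # only k floats per frame are sent
    reconstruction = decode(coeffs, mean_face, comps)

As a rough illustration of the bitrate, with made-up parameters: transmitting 10 coefficients per frame, each quantized to 16 bits, at 25 frames per second costs 10 × 16 × 25 = 4,000 bits/s, which is in line with the sub-5 kbps figure quoted above.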
4

Learning descriptive models of objects and activities from egocentric video

Fathi, Alireza 29 August 2013 (has links)
Recent advances in camera technology have made it possible to build a comfortable, wearable system which can capture the scene in front of the user throughout the day. Products based on this technology, such as GoPro and Google Glass, have generated substantial interest. In this thesis, I present my work on egocentric vision, which leverages wearable camera technology and provides a new line of attack on classical computer vision problems such as object categorization and activity recognition. The dominant paradigm for object and activity recognition over the last decade has been based on using the web. In this paradigm, in order to learn a model for an object category like coffee jar, various images of that object type are fetched from the web (e.g. through Google image search), features are extracted and then classifiers are learned. This paradigm has led to great advances in the field and has produced state-of-the-art results for object recognition. However, it has two main shortcomings: a) objects on the web appear in isolation and they miss the context of daily usage; and b) web data does not represent what we see every day. In this thesis, I demonstrate that egocentric vision can address these limitations as an alternative paradigm. I will demonstrate that contextual cues and the actions of a user can be exploited in an egocentric vision system to learn models of objects under very weak supervision. In addition, I will show that measurements of a subject's gaze during object manipulation tasks can provide novel feature representations to support activity recognition. Moving beyond surface-level categorization, I will showcase a method for automatically discovering object state changes during actions, and an approach to building descriptive models of social interactions between groups of individuals. These new capabilities for egocentric video analysis will enable new applications in life logging, elder care, human-robot interaction, developmental screening, augmented reality and social media.
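As a concrete, deliberately simplified picture of the web-based paradigm described above, the sketch below trains a linear classifier on hand-crafted features extracted from a stack of images. The random arrays stand in for images that would normally be fetched from a web search, and the feature and classifier choices are illustrative assumptions, not those used in the thesis.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    # Stand-in data: random 64x64 grayscale 'images' replace the pictures a web
    # search would return for a category (e.g. 'coffee jar') and for background.
    rng = np.random.default_rng(2)
    images = rng.random((40, 64, 64))
    labels = np.repeat([0, 1], 20)

    # Extract a descriptor per image, then learn a linear classifier on top.
    features = np.stack([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                         for img in images])
    classifier = LinearSVC().fit(features, labels)
    print(classifier.score(features, labels))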
5

Localization from a body-worn video camera (Localisation à partir de caméra vidéo portée)

Dovgalecs, Vladislavs 05 December 2011 (has links) (PDF)
Content-based indexing of lifelogs produced by wearable sensors has emerged as a high-value challenge, enabling the exploitation of these new types of data. With miniaturized recording devices now readily available, the need has grown for automatic extraction of relevant information from the content such devices generate. Among other applications, indoor localization is one of the open problems we address in this thesis. Many existing localization solutions perform insufficiently well or require substantial manual intervention. In this thesis, we address the problem of topological localization from video sequences captured by a body-worn camera, using a purely visual approach. This work covers the full pipeline, from the extraction of low-level visual descriptors to the final location estimate, using automatic algorithms. In this context, the main contributions of this work concern the effective exploitation of the information provided by multiple visual descriptors, by unlabeled images, and by the temporal continuity of the video. Early fusion and late fusion of the visual data were examined, and the benefit brought by complementary visual descriptors was demonstrated on the localization problem. Because labeled data are difficult to obtain in sufficient quantities, the whole dataset was exploited: on the one hand, non-linear dimensionality reduction approaches were applied to reduce the size of the data to be processed and the associated complexity; on the other hand, semi-supervised approaches were studied to use the additional information provided by unlabeled images during classification. These elements were analyzed separately and then combined into a new co-training method with temporal information. Finally, we also explored the question of descriptor invariance, proposing the use of learning invariant to spatial transformation as another possible answer to the lack of annotated data and to visual variability. These methods were evaluated on publicly available video sequences recorded in a controlled environment, in order to assess the specific gain of each contribution. This work was also applied within the IMMED project, which concerns the observation and indexing of activities of daily living with a body-worn video camera, for the purpose of assisting medical diagnosis. We were thus able to deploy the wearable video acquisition setup and demonstrate the potential of our approach for topological location estimation on a corpus presenting difficult conditions representative of real data.
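To make the fusion strategies mentioned above concrete, the sketch below contrasts early fusion (concatenating two descriptors before a single classifier) with late fusion (one classifier per descriptor, averaging their predicted probabilities). All shapes, descriptor contents, and the choice of logistic regression are illustrative assumptions, not the methods or data of the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Stand-in data: two complementary visual descriptors per frame (e.g. a
    # color histogram and a bag-of-visual-words vector) with room labels.
    rng = np.random.default_rng(3)
    desc_a = rng.random((200, 32))
    desc_b = rng.random((200, 64))
    rooms = rng.integers(0, 4, 200)  # topological location labels

    # Early fusion: concatenate the descriptors and train a single classifier.
    early = LogisticRegression(max_iter=1000).fit(np.hstack([desc_a, desc_b]), rooms)

    # Late fusion: one classifier per descriptor; average the predicted
    # probabilities, then pick the most likely room.
    clf_a = LogisticRegression(max_iter=1000).fit(desc_a, rooms)
    clf_b = LogisticRegression(max_iter=1000).fit(desc_b, rooms)
    avg_proba = (clf_a.predict_proba(desc_a) + clf_b.predict_proba(desc_b)) / 2
    late_pred = avg_proba.argmax(axis=1)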
