About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Reconfigurable hardware for color space conversion

Patil, Sreenivas. January 2008
Thesis (M.S.)--Rochester Institute of Technology, 2008. / Typescript. Includes bibliographical references (leaves 29-32).
32

Youth culture and the struggle for social space: the Nigerian video films

Ugor, Paul Ushang. January 2009
Thesis (Ph.D.)--University of Alberta, 2009. / Title from pdf file main screen (viewed on July 31, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of English and Film Studies." Includes bibliographical references.
33

Technology adoption and diffusion in the South African online video industry: a technopreneurial analysis

Matlabo, Tiisetso January 2017
Thesis (M.M. (Entrepreneurship and New Venture Creation)), University of the Witwatersrand, Faculty of Commerce, Law and Management, Wits Business School, 2016. / Over the past few years the South African market has seen the launch of a number of online video service providers. The leading providers in the industry are Vidi, ON-Tap, MTN FrontRow and ShowMax. The industry has also attracted international competition, with big players like Netflix launching services in the South African market in January 2016. Although the industry has seen the emergence of many new players, it is still in its infancy in South Africa, and it remains to be seen whether it will mature into a long-term profit-making industry. It is important to research the diffusion of this innovation and, more specifically, to look at how technopreneurs already in this field, or considering entering it, can influence the speed and success with which this new innovation is diffused. This research focuses on two areas. First, it examines the factors that influence a potential adopter's propensity to adopt a new product. Second, it looks at the role played by the technopreneur in ensuring that online video services are adopted successfully. Since the online video services industry is not yet mature, the research was conducted using a mixed-method approach. The quantitative research was conducted by distributing online survey questionnaires, using email as well as social media networks such as Facebook, Twitter and LinkedIn. The qualitative research was conducted through interviews with a predetermined list of respondents. The combination of the two types of research led to a better understanding of the topic. The results of the research highlighted the fact that the South African market poses unique challenges for entrepreneurs that want to enter this industry. South African technopreneurs have an advantage over international players like Netflix because they understand the local challenges of internet access, payment issues and preferred content. / XL2018
34

Enunciation in vertical videos: the protagonism of the body [A enunciação nos vídeos verticais: o protagonismo do corpo]

Pereira, Henrique da Silva. January 2018
Advisor: Ana Silvia Lopes Davi Médola / Committee: José Carlos Marques / Committee: Conrado Moreira Mendes / Abstract: This research investigated the visuality of music videos produced in vertical aspect, focusing on the enunciative strategies of these audiovisual texts. Aiming to identify the production of meaning, as well as the communication relations made possible by smartphones, the four most popular vertical music videos on YouTube in the first half of 2018 were analyzed using the theoretical and methodological framework of French discursive semiotics. The figure of the human body remains the focal point of representation in these videos, reaffirming the discourse of valuing the individual in the universe of music video production. From the corpus analyzed, it was observed that the visual construction of these clips can reference both an aesthetic related to the practice of the selfie, valuing a narcissistic, celebrity-driven logic, and the aesthetic and editing characteristic of the traditional music video, specular and fragmentary, aimed specifically at viewing on smartphones. / Master's
35

Deep Learning for Action Understanding in Video

Shou, Zheng January 2019
Action understanding is key to automatically analyzing video content and is thus important for many real-world applications such as autonomous driving cars, robot-assisted care, etc. It has therefore been one of the fundamental research topics in the computer vision field. Most conventional methods for action understanding are based on hand-crafted features. Following the recent advances in image classification, object detection, image captioning, etc., deep learning has become a popular approach for action understanding in video. However, several important research challenges remain in developing deep learning based methods for understanding actions. This thesis focuses on the development of effective deep learning methods for solving three major challenges.

Action detection at fine granularities in time: Previous work in deep learning based action understanding mainly focused on exploring various backbone networks designed for the video-level action classification task. These did not exploit fine-grained temporal characteristics and thus failed to produce temporally precise estimates of action boundaries. In order to understand actions more comprehensively, it is important to detect actions at finer granularities in time. In Part I, we study both segment-level and frame-level action detection. Segment-level action detection is usually formulated as the temporal action localization task, which requires not only recognizing action categories for the whole video but also localizing the start and end time of each action instance. To this end, we propose an effective multi-stage framework called Segment-CNN consisting of three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. Frame-level action detection, in turn, is formulated as the per-frame action labeling task. We combine two reverse operations (i.e. convolution and deconvolution) into a joint Convolutional-De-Convolutional (CDC) filter, which simultaneously conducts downsampling in space and upsampling in time to jointly model high-level semantics and temporal dynamics. We design a novel CDC network to predict actions at the frame level, and the frame-level predictions can further be used to detect precise segment boundaries for the temporal action localization task. Our method not only improves the state-of-the-art mean Average Precision (mAP) on THUMOS'14 from 41.3% to 44.4% for the per-frame labeling task, but also improves mAP for the temporal action localization task from 19.0% to 23.3% on THUMOS'14 and from 16.4% to 23.8% on ActivityNet v1.3.

Action detection in constrained scenarios: The usual training process of deep learning models requires both supervision and data, which are not always available in reality. In Part II, we consider the scenarios of incomplete supervision and incomplete data. For incomplete supervision, we focus on the weakly-supervised temporal action localization task and propose AutoLoc, the first framework that can directly predict the temporal boundary of each action instance with only video-level annotations available during training. To enable the training of such a boundary prediction model, we design a novel Outer-Inner-Contrastive (OIC) loss to help discover segment-level supervision, and we prove that the OIC loss is differentiable with respect to the underlying boundary prediction model. Our method significantly improves mAP on THUMOS'14 from 13.7% to 21.2% and mAP on ActivityNet from 7.4% to 27.3%. For the scenario of incomplete data, we formulate a novel task called Online Detection of Action Start (ODAS) in streaming videos, to enable detecting the action start time on the fly when a live video action is just starting. ODAS is important in many applications, such as generating early alerts to allow timely security or emergency response. Specifically, we propose three novel methods to address the challenges in training ODAS models: (1) generating hard negative samples based on a Generative Adversarial Network (GAN) to distinguish ambiguous background, (2) explicitly modeling the temporal consistency between data around the action start and data succeeding the action start, and (3) an adaptive sampling strategy to handle the scarcity of training data.

Action understanding in the compressed domain: The mainstream action understanding methods, including the aforementioned techniques developed by us, require first decoding the compressed video into RGB image frames, which may incur significant storage and computation costs. Recently, researchers have started to investigate how to perform action understanding directly in the compressed domain, in order to achieve high efficiency while maintaining state-of-the-art action detection accuracy. The key research challenge is developing effective backbone networks that can directly take data in the compressed domain as input. Our baseline is to take models developed for action understanding in the decoded domain and adapt them to the same tasks in the compressed domain. In Part III, we address two important issues in developing backbone networks that operate exclusively in the compressed domain. First, compressed videos may be produced by different encoders or encoding parameters, but it is impractical to train a different compressed-domain action understanding model for each format. We experimentally analyze the effect of video encoder variation and develop a simple yet effective training data preparation method to alleviate sensitivity to encoder variation. Second, motion cues have been shown to be important for action understanding, but the motion vectors in compressed video are often too noisy and not discriminative enough for accurate action understanding. We develop a novel and highly efficient framework called DMC-Net that learns to predict discriminative motion cues from the noisy motion vectors and residual errors in compressed video streams. On three action recognition benchmarks, namely HMDB-51, UCF101 and a subset of Kinetics, we demonstrate that DMC-Net significantly narrows the performance gap between state-of-the-art compressed-video-based methods with and without optical flow, while being two orders of magnitude faster than methods that use optical flow.

By addressing the three major challenges above, we develop more robust models for video action understanding and improve performance along several dimensions: (1) temporal precision, (2) required level of supervision, (3) live video analysis ability, and (4) efficiency in processing compressed video. Our research has contributed significantly to advancing the state of the art of video action understanding and to expanding the foundation for comprehensive semantic understanding of video content.
36

Multimodal News Summarization, Tracking and Annotation Incorporating Tensor Analysis of Memes

Tsai, Chun-Yu January 2017
We demonstrate four novel multimodal methods for efficient video summarization and comprehensive cross-cultural news video understanding. First, for quick video browsing, we demonstrate a multimedia event recounting system. Based on nine people-oriented design principles, it summarizes YouTube-like videos into short visual segments (8-12 sec) and textual words (fewer than 10 terms). In the 2013 TRECVID Multimedia Event Recounting competition, this system placed first in recognition time efficiency, while remaining above average in description accuracy. Secondly, we demonstrate the summarization of large amounts of online international news video. In order to understand an international event such as the Ebola virus, AirAsia Flight 8501 or the Zika virus comprehensively, we present a novel and efficient constrained tensor factorization algorithm that first represents a video archive of multimedia news stories concerning a news event as a sparse tensor of order 4. The dimensions correspond to extracted visual memes, verbal tags, time periods, and cultures. The iterative algorithm approximately but accurately extracts coherent quad-clusters, each of which represents a significant summary of an important independent aspect of the news event. We give examples of quad-clusters extracted from tensors with at least 10^8 entries derived from international news coverage. We show the method is fast, can be tuned to give preference to any subset of its four dimensions, and exceeds three existing methods in performance. Thirdly, noting that the co-occurrence of visual memes and tags in our summarization result is sparse, we show how to model cross-cultural visual meme influence based on normalized PageRank, which more accurately captures the rates at which visual memes are reposted in a specified time period in a specified culture. Lastly, we establish correspondences between videos and text descriptions in different cultures via reliable visual cues, detect culture-specific tags for visual memes, and then annotate videos in a cultural setting. Starting with any video with little or no text in one culture (say, the US), we select candidate annotations from the text of another culture (say, China) to annotate the US video. By analyzing the similarity of images annotated by those candidates, we derive a set of proper tags from the viewpoint of the other culture (China). We illustrate culture-based annotation with examples from segments of international news. We evaluate the generated tags by cross-cultural tag frequency, tag precision, and user studies.
37

Mean Time Between Visible Artifacts in Visual Communications

Suresh, Nitin 31 May 2007 (has links)
As digital communication of television content becomes more pervasive, and as networks supporting such communication become increasingly diverse, the long-standing problem of assessing video quality by objective measurements becomes particularly important. Content owners as well as content distributors stand to benefit from rapid objective measurements that correlate well with subjective assessments and, further, do not depend on the availability of the original reference video. This thesis investigates different techniques of subjective and objective video evaluation. Our research recommends a functional quality metric called Mean Time Between Failures (MTBF), where failure refers to video artifacts deemed perceptually noticeable, and investigates objective measurements that correlate well with subjective evaluations of MTBF. Work was done to determine the usefulness of some existing objective metrics by noting their correlation with MTBF. The research also includes experimentation with network-induced artifacts, and a study of statistical methods for correlating candidate objective measurements with the subjective metric. The statistical significance and spread properties of the correlations are studied, and a comparison of subjective MTBF with the existing subjective measure of MOS is performed. These results suggest that MTBF has a direct and predictable relationship with MOS, and that they have similar variations across different viewers. The research is particularly concerned with the development of new no-reference objective metrics that are easy to compute in real time and correlate better than current metrics with the intuitively appealing MTBF measure. The approach to obtaining greater subjective relevance has included the study of better spatial-temporal models for noise-masking and test data pooling in video perception. A new objective metric, the 'Automatic Video Quality' (AVQ) metric, is described and shown to run in real time with a high degree of correlation with actual subjective scores, the correlation values approaching those of metrics that use full or partial reference. This metric does not need any reference to the original video, and when used on displayed MPEG-2 streams it calculates and indicates the video quality in terms of MTBF. Certain diagnostics, such as the amount of compression and network artifacts, are also shown.
38

Creating a video portfolio for the intermedia artist / Title of accompanying AV material: Video portfolio

Bischoff, LeAnn January 1991
The purpose of this creative project was to provide a way for intermedia artists to present their artwork. The creative project is a videotape of the author's works, which are separated into four categories: computer images, two-dimensional animation, photographs and three-dimensional animation. Emphasis was placed on unifying artwork from different mediums. Digital editing effects were used to help distinguish between the various artwork sections. The five-minute piece is presented on a VHS tape. / Department of Art
39

Retrospection and deliberation: the creative summary of the high definition video works / Title of accompanying DVD: Style, grace, praise.

Chu, Xiaoge January 2005
This paper reviews the video production process used to create the creative portion of the thesis project. During this process, I explored creative art theory, creative methods, and new technology applications. For the production of the thesis, I used a high definition digital video camera to illustrate the conflict and fusion between East and West on the level of cultural mythology. The thesis comprises five parts: Preface; Statement of the problem; Review of influence; Description of the artworks, with seven subdivisions (Theme of the project; Selection of creative style; Elements of art and cinematography; Project overview; Transposing the concrete into the abstract; Exhibiting understanding of the language of cinema; Creative application of emerging HDV technology); and Conclusion and exhibition statement. / Department of Art
40

An experiment in portable escapism: storytelling and the iPod / Title of accompanying DVD: How your life is a story

Gumaste, Nitin S. January 2006
This study examines the possibility of creating original video-based content for the video-enabled iPod released in October 2005. Current trends show that existing content created for conventional media like television, cinema and computers is simply being ported over to this new medium. When this project began, however, no production studios were concentrating on creating content specifically for this medium, which has its own unique properties, such as portability, screen size and the ability to easily start and pause content as required. The purpose of this project is to demonstrate that such medium-specific content can be created and made financially viable for its creators. This hypothesis is then put to the test by presenting the content to a group of Ball State University students, whose responses are examined in detail. / Department of Telecommunications
