581

Marketing a new service by a public utility company: the case study on Videolink.

January 1991
by Luk Wai-keung. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1991. / Bibliography: leaf 80. / ABSTRACT --- p.i / TABLE OF CONTENTS --- p.iii / Chapter / Chapter I. --- INTRODUCTION --- p.1 / Proliferation of New Products --- p.1 / Impact of Market Forces --- p.2 / The Case of New Service Launch --- p.4 / Outline of Study --- p.5 / Chapter II. --- METHODOLOGY --- p.6 / Multi-dimensional Approach --- p.6 / Literature Survey --- p.7 / Review of Company's Own Records --- p.8 / Discussion with Product Management Team --- p.9 / Field Observation of Marketing Activities --- p.9 / User Interviews --- p.10 / Chapter III. --- INDUSTRY & COMPANY OVERVIEW --- p.12 / Global Trend in the Telecom Industry --- p.12 / Rapid Development & Proliferation of New Services --- p.12 / Deregulation of Telecommunication Industry --- p.13 / Telecommunication Industry in Hong Kong --- p.14 / The Hong Kong Telephone Company - Monopoly & Its Impacts --- p.17 / Company Strengths --- p.19 / Company Weaknesses --- p.20 / Chapter IV. --- VIDEOLINK SERVICE & ITS DEVELOPMENT IN HONG KONG --- p.22 / What is VideoLink ? --- p.22 / Some Technical Information --- p.23 / VideoLink Service Development in Hong Kong --- p.24 / Development History --- p.24 / Service Format & Pricing --- p.26 / Preliminary Assessment of Marketing Progress --- p.27 / Chapter V. --- VIDEOLINK MARKETING PROFILES --- p.29 / Product Profile --- p.29 / Product Strengths --- p.29 / Product Weaknesses --- p.32 / Competitive Profile --- p.33 / Competing Products & Services --- p.33 / Evaluation of Competitors --- p.34 / Customer Profile --- p.37 / User Requirements & Characteristics --- p.37 / Application Types & Market Potentials --- p.39 / Chapter VI. 
--- ANALYSING THE MARKETING PROBLEMS --- p.44 / Review of Marketing Program --- p.44 / Product Strategy --- p.45 / Pricing Strategy --- p.45 / Sales & Distribution --- p.46 / Promotion Strategy --- p.47 / Identifying the Marketing Problems --- p.49 / Exploring the Underlying Causes --- p.52 / Technology Driven Culture --- p.52 / Monopoly Status --- p.54 / Organisational Hindrance --- p.55 / Chapter VII. --- STRATEGIC RECOMMENDATION --- p.57 / Reformulating the Marketing Program --- p.57 / Identifying the Target Market --- p.57 / The Product Offerings --- p.60 / The Promotion Mix --- p.62 / The Sales Strategy --- p.63 / Establishing a Marketing Orientation --- p.64 / Chapter VIII. --- CONCLUSION --- p.70 / APPENDIX / Chapter 1 --- User Interviews Discussion Guideline --- p.73 / Chapter 2 --- VideoLink Tariff Schedule --- p.75 / Chapter 3 --- Customer Profile Analysis --- p.76 / Chapter 4 --- Organisation Structure of VideoLink Team --- p.77 / Chapter 5 --- VideoLink Service Configuration - Comparison of Fixed & Switched Connections --- p.78 / Chapter 6 --- Hong Kong Telephone Corporate Vision Program --- p.79 / BIBLIOGRAPHY --- p.80
582

Object-based scalable wavelet image and video coding. / CUHK electronic theses & dissertations collection

January 2008
The first part of this thesis studies advanced wavelet transform techniques for scalable still image object coding. In order to adapt to the content of a given signal and obtain a more flexible adaptive representation, two advanced wavelet transform techniques, the wavelet packet transform and the directional wavelet transform, are developed for object-based image coding. Extensive experiments demonstrate that the new wavelet image coding systems perform comparably to or better than the state of the art in image compression while possessing some attractive features such as object-based coding functionality and high coding scalability. / The objective of this thesis is to develop an object-based coding framework built upon a family of wavelet coding techniques for a variety of arbitrarily shaped visual object scalable coding applications. Two kinds of arbitrarily shaped visual object scalable coding techniques are investigated in this thesis. One is object-based scalable wavelet still image coding; the other is object-based scalable wavelet video coding. / The second part of this thesis investigates various components of object-based scalable wavelet video coding. A generalized 3-D object-based directional threading, which unifies the concepts of temporal motion threading and spatial directional threading, is seamlessly incorporated into the 3-D shape-adaptive directional wavelet transform to exploit the spatio-temporal correlation inside the 3-D video object. To improve the computational efficiency of multi-resolution motion estimation (MRME) in the shift-invariant wavelet domain, two fast MRME algorithms are proposed for wavelet-based scalable video coding. As demonstrated in the experiments, the proposed 3-D object-based wavelet video coding techniques consistently outperform MPEG-4 and other wavelet-based schemes for coding arbitrarily shaped video objects, while providing full spatio-temporal-quality scalability with non-redundant 3-D subband decomposition. / Liu, Yu. 
/ Adviser: King Ngi Ngan. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3693. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 166-173). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
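The scalable coding described above rests on multi-resolution wavelet subband decomposition. As a minimal illustration of that principle (a plain 1-D Haar transform, not the thesis's shape-adaptive or directional transforms), one decomposition level splits a signal into low-pass and high-pass subbands with perfect reconstruction:

```python
# One-level orthonormal 1-D Haar wavelet transform. A sketch of the
# subband principle only; the thesis uses shape-adaptive and directional
# wavelet transforms, not plain Haar.
import math

def haar_forward(signal):
    """Split an even-length signal into approximation and detail subbands."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the signal from its two subbands."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)  # even sample
        out.append((a - d) / s)  # odd sample
    return out
```

Discarding the detail subband leaves a half-resolution approximation, which is the basis of the resolution scalability the abstract refers to.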
583

End to end Multi-Objective Optimisation of H.264 and HEVC CODECs

Al Barwani, Maryam Mohsin Salim January 2018
All multimedia devices now incorporate video CODECs that comply with international video coding standards such as H.264 / MPEG4-AVC and the new High Efficiency Video Coding Standard (HEVC), otherwise known as H.265. Although the standard CODECs have been designed to include algorithms with optimal efficiency, a large number of coding parameters can be used to fine-tune their operation within known constraints, e.g., available computational power, bandwidth, and consumer QoS requirements. With so many parameters involved, determining which of them play a significant role in providing optimal quality of service within given constraints is a challenge in itself. A further question is how to select the values of the significant parameters so that the CODEC performs optimally under the given constraints. This thesis proposes a framework that uses machine learning algorithms to model the performance of a video CODEC based on the significant coding parameters. Means of modelling both Encoder and Decoder performance are proposed. We define objective functions that model the performance-related properties of a CODEC, i.e., video quality, bit-rate and CPU time. We show that these objective functions can be practically utilised in video Encoder/Decoder designs, in particular in their performance optimisation within given operational and practical constraints. A Multi-objective Optimisation framework based on Genetic Algorithms is thus proposed to optimise the performance of a video CODEC. The framework is designed to jointly minimise the CPU time and bit-rate and to maximise the quality of the compressed video stream. The thesis presents the use of this framework in the performance modelling and multi-objective optimisation of the most widely used video coding standard in practice at present, H.264, and of the latest video coding standard, H.265/HEVC. 
When a communication network is used to transmit video, performance-related parameters of the communication channel will impact the end-to-end performance of the video CODEC. Network delays and packet loss will impact the quality of the video received at the decoder via the communication channel; i.e., even if a video CODEC is optimally configured, network conditions will make the experience sub-optimal. Given the above, the thesis proposes the design, integration and testing of a novel approach to simulating a wired network, using the UDP protocol for the transmission of video data. This network is subsequently used to simulate the impact of packet loss and network delays on optimally coded video, based on the framework previously proposed for the modelling and optimisation of video CODECs. The quality of received video under different levels of packet loss and network delay is simulated, and conclusions are drawn about the impact on transmitted videos based on their content and features.
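The joint minimisation of CPU time and bit-rate against quality rests on the notion of Pareto dominance between candidate parameter configurations. A minimal sketch of that core idea (objective vectors here are invented placeholders, not the thesis's genetic algorithm or measured codec data):

```python
# Pareto-dominance filter over candidate codec configurations.
# Each candidate is an objective vector such as (cpu_time, bitrate,
# -quality), with every component minimised. Values are illustrative.

def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of a list of objective vectors."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

A multi-objective genetic algorithm such as the one the thesis proposes evolves a population of parameter sets toward this non-dominated front rather than toward a single scalar optimum.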
584

Evaluating Student Use Patterns of Streaming Video Lecture Capture in a Large Undergraduate Classroom

Whitley-Grassi, Nathan E. 01 January 2017
Large classes, which allow less instructor-student interaction, have become more common in today's colleges. The best way to give students the opportunities they need to overcome this lack of interaction with instructors remains unidentified. This research evaluated the use of video lecture capture (VLC) as a supplemental method for teacher-student interaction and what impact, if any, it and attendance have on student performance in large lecture courses. This ex post facto study, conducted at a Northeastern research university, utilized cognitive and andragogical frameworks to examine the relationships between the independent variables of frequency of video viewing, quantity of videos viewed, and course attendance, as well as their impact on course performance in a large lecture course (N = 329). Data sources included archival data from the learning management system and student survey responses. Analysis included a series of two-way ANOVA tests. The results indicated that the frequency of video viewing had a significant positive effect on course performance (F = 3.018, p = .030). The number of VLC videos not viewed was also found to have a significant negative effect on course performance (F = 1.875, p = .016). The other independent variables were not found to have any significant main effect or interaction effect on the dependent variable, course performance. Findings from this research may be used by educators, students, and administrators planning course sizes and availability to better understand the relationship between these variables and how VLC can be used effectively in large lecture classes, thus leading to improved efficacy in VLC use.
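The reported F-ratios compare between-group to within-group variance. As a minimal illustration of how such a ratio is computed (a one-way version with invented data; the study itself ran two-way ANOVAs with interaction terms):

```python
# One-way ANOVA F statistic: between-group mean square divided by
# within-group mean square. Illustrative only; the study used two-way
# ANOVAs on viewing frequency, videos viewed, and attendance.

def anova_f(groups):
    """F statistic for a list of numeric groups (one factor)."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n_total
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within
```

The F value is then compared against the F distribution with (k - 1, N - k) degrees of freedom to obtain the p-values reported in the abstract.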
585

Enhancing oral comprehension and emotional recognition skills in children with autism: A comparison of video self modelling with video peer modelling

Koretz, Jasmine May January 2007
Video modelling has been shown to be an effective intervention for autistic individuals, as it takes the characteristics of those individuals into account. Research on video self modelling and video peer modelling with this population has shown that both are effective. The purpose of this study was to replicate past findings that video modelling is an effective strategy for autistic individuals, and to compare video self modelling with video peer modelling to determine which is more effective. The studies here used multiple baselines with alternating treatments designs with 6 participants across two target behaviours: emotional recognition and oral comprehension. The first study compared the two video modelling methods and found that neither increased the target behaviours to criterion for 5 of the 6 participants. For 1 participant the criterion was reached only in the video self modelling condition, for the target behaviour 'oral comprehension'. The second study first examined the effectiveness of video self modelling and video peer modelling with supplementary assistance for 4 participants. Second, it examined a new peer video for a 5th participant, and third, it compared the two video modelling methods (with supplementary assistance). Results indicated that 1 participant reached the criterion in both video modelling conditions, 1 participant showed improvements, and 2 participants never increased responding. This study indicated that the clarity of speech produced by the peer participant in the peer video may have contributed to a participant's level of correct responding, because a new peer video used during the second study dramatically increased this participant's responding. Intervention fidelity, generalisation and follow-up data were examined. Measures of intervention fidelity indicated procedural reliability. Generalisation was unsuccessful across three measures, and follow-up data indicated trends similar to intervention. 
Only video self modelling effects remained at criterion during follow-up. Results are discussed with reference to limitations, future research and implications for practice.
586

Video sequence synchronization

Wedge, Daniel John January 2008
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ... Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence. 
Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences. In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
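The final algorithm recovers the frame rate ratio and frame offset with nested RANSAC over matched space-time interest points. The core idea can be sketched as a robust fit of the temporal mapping t2 = alpha * t1 + beta to putative event-time correspondences (synthetic timestamp pairs here, not the thesis's interest-point matching or nested-RANSAC structure):

```python
# RANSAC fit of t2 = alpha * t1 + beta from putative frame-time
# correspondences, tolerant of mismatched (outlier) pairs. A simplified
# sketch; the thesis nests RANSAC instances and also recovers epipolar
# geometry from space-time interest points.
import random

def ransac_sync(pairs, iters=200, tol=0.5, seed=0):
    """pairs: list of (t1, t2). Return the (alpha, beta) hypothesis
    supported by the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = (1.0, 0.0), -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(pairs, 2)  # minimal sample: 2 pairs
        if x1 == x2:
            continue
        alpha = (y2 - y1) / (x2 - x1)
        beta = y1 - alpha * x1
        inliers = sum(abs(alpha * x + beta - y) <= tol for x, y in pairs)
        if inliers > best_inliers:
            best, best_inliers = (alpha, beta), inliers
    return best
```

Here alpha corresponds to the frame rate ratio and beta to the frame offset; mismatched event pairs are simply outvoted by the consensus set.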
587

Resource list for video production in the local church

Gascho, Timothy N. January 2006
Thesis (Th. M.)--Dallas Theological Seminary, 2006. / Includes bibliographical references (leaves 54-58).
588

Video transmission over wireless networks

Zhao, Shengjie 29 August 2005
Compressed video bitstream transmissions over wireless networks are addressed in this work. We first consider error control and power allocation for transmitting wireless video over CDMA networks in conjunction with multiuser detection. We map a layered video bitstream to several CDMA fading channels and inject multiple source/parity layers into each of these channels at the transmitter. We formulate a combined optimization problem and give the optimal joint rate and power allocation for the linear minimum mean-square error (MMSE) multiuser detector in the uplink and for two types of blind linear MMSE detectors, i.e., the direct-matrix-inversion (DMI) blind detector and the subspace blind detector, in the downlink. We then present a multiple-channel video transmission scheme in wireless CDMA networks over multipath fading channels. For a given budget on the available bandwidth and total transmit power, the transmitter determines the optimal power allocations and the optimal transmission rates among multiple CDMA channels, as well as the optimal product channel code rate allocation. We also make use of results on the large-system CDMA performance for various multiuser receivers in multipath fading channels. We employ a fast joint source-channel coding algorithm to obtain the optimal product channel code structure. Finally, we propose an end-to-end architecture for multi-layer progressive video delivery over space-time differentially coded orthogonal frequency division multiplexing (STDC-OFDM) systems. We propose to use progressive joint source-channel coding to generate operational transmission distortion-power-rate (TD-PR) surfaces. By extending the rate-distortion function in source coding to the TD-PR surface in joint source-channel coding, our work can use the "equal slope" argument to effectively solve the transmission rate allocation problem as well as the transmission power allocation problem for multi-layer video transmission. 
It is demonstrated through simulations that as the wireless channel conditions change, these proposed schemes can scale the video streams and transport the scaled video streams to receivers with a smooth change of perceptual quality.
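The "equal slope" argument allocates each unit of rate (or power) where the marginal distortion reduction is greatest, equalising the slopes of the layers' operating curves at the optimum. A greedy sketch of this classic allocation idea (the per-layer distortion tables are invented convex curves, not the thesis's measured TD-PR surfaces):

```python
# Greedy "equal slope" style bit allocation: repeatedly give one unit of
# rate to the layer whose distortion drops most per unit spent. Optimal
# for convex distortion curves. Curves here are illustrative stand-ins
# for the operational TD-PR surfaces described in the abstract.

def allocate(curves, budget):
    """curves[i][r] = distortion of layer i at integer rate r.
    Return per-layer rates summing to at most budget."""
    rates = [0] * len(curves)
    for _ in range(budget):
        # marginal distortion reduction of one more unit for each layer
        gains = [curves[i][rates[i]] - curves[i][rates[i] + 1]
                 if rates[i] + 1 < len(curves[i]) else 0.0
                 for i in range(len(curves))]
        i = max(range(len(curves)), key=lambda j: gains[j])
        if gains[i] <= 0.0:
            break  # no layer benefits from more rate
        rates[i] += 1
    return rates
```

At termination the surviving marginal slopes are (approximately) equal across layers, which is the equal-slope optimality condition the abstract invokes.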
589

Computational video: post-processing methods for stabilization, retargeting and segmentation

Grundmann, Matthias 05 April 2013
In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professional and casually shot videos, the latter mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments, mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot on mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies, parametrized by scanline blocks, to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer. 
We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach: we effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
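Classical single-image seam carving, which the retargeting work above extends with temporally discontinuous seams, finds the minimum-energy 8-connected vertical path through an energy map by dynamic programming. A minimal sketch on a toy energy grid (not the thesis's appearance-based temporal coherence formulation):

```python
# Dynamic-programming vertical seam: the minimum cumulative-energy
# 8-connected path from the top row to the bottom row of an energy grid.
# Classic single-image seam carving; the thesis generalizes this to
# discontinuous space-time seams for video.

def min_seam(energy):
    """energy: list of rows of floats. Return one column index per row,
    tracing the minimum-energy vertical seam."""
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    # forward pass: cheapest way to reach each cell from the top
    for y in range(1, h):
        for x in range(w):
            above = cost[y - 1][max(0, x - 1):min(w, x + 2)]
            cost[y][x] += min(above)
    # backtrack from the cheapest bottom cell
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        candidates = range(max(0, x - 1), min(w, x + 2))
        seam.append(min(candidates, key=lambda c: cost[y][c]))
    return seam[::-1]
```

Removing the returned seam shrinks the image by one column while deleting the least salient pixels; the thesis's contribution is allowing such seams to break continuity across frames.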
590

Exploiting Information Extraction Techniques For Automatic Semantic Annotation And Retrieval Of News Videos In Turkish

Kucuk, Dilek 01 February 2011
Information extraction (IE) is known to be an effective technique for automatic semantic indexing of news texts. In this study, we propose a text-based, fully automated system for the semantic annotation and retrieval of news videos in Turkish which exploits several IE techniques on the video texts. The IE techniques employed by the system include named entity recognition, automatic hyperlinking, person entity extraction with coreference resolution, and event extraction. The system utilizes the outputs of the components implementing these IE techniques as the semantic annotations for the underlying news video archives. Apart from the IE components, the proposed system comprises a news video database in addition to components for news story segmentation, sliding text recognition, and semantic video retrieval. We also propose a semi-automatic counterpart of the system where the only manual intervention takes place during text extraction. Both systems are executed on genuine video data sets consisting of videos broadcast by the Turkish Radio and Television Corporation. The current study is significant as it proposes the first fully automated system to facilitate semantic annotation and retrieval of news videos in Turkish, yet the proposed system and its semi-automatic counterpart are quite generic and hence could be customized to build similar systems for video archives in other languages as well. Moreover, IE research on Turkish texts is known to be rare, and within the course of this study we have proposed and implemented novel techniques for several IE tasks on Turkish texts. As an application example, we have demonstrated the utilization of the implemented IE components to facilitate multilingual video retrieval.
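Named entity recognition, one of the IE components listed above, can be illustrated in its simplest form as longest-match gazetteer lookup over a token stream (the toy dictionary and tag set below are invented; the thesis's Turkish NER combines much richer lexical resources with pattern rules):

```python
# Minimal gazetteer-based named entity tagger: greedy longest-match
# lookup of known multi-token names. A toy sketch of one IE component;
# real NER for Turkish must also handle morphology and patterns.

GAZETTEER = {
    ("Ankara",): "LOCATION",
    ("Mustafa", "Kemal"): "PERSON",
}

def tag_entities(tokens):
    """Return (start, end, label) spans, end exclusive."""
    spans, i = [], 0
    max_len = max(len(k) for k in GAZETTEER)
    while i < len(tokens):
        for n in range(max_len, 0, -1):  # prefer the longest match
            key = tuple(tokens[i:i + n])
            if len(key) == n and key in GAZETTEER:
                spans.append((i, i + n, GAZETTEER[key]))
                i += n
                break
        else:
            i += 1
    return spans
```

In the proposed system, spans like these become the semantic annotations attached to video segments, which the retrieval component then queries.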
