About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Producing a low-budget, feature-length entertainment program on video

Ingham, Dale. January 1991
Thesis (M.S.)--Kutztown University of Pennsylvania, 1991. Source: Masters Abstracts International, Volume: 45-06, page: 2807. Abstract precedes thesis as [4] preliminary leaves. Typescript. Includes bibliographical references (leaf 51).
12

EXPENDABLE LAUNCH VEHICLE VIDEO SYSTEM

Brierley, Scott; Lothringer, Roy. October 2003
International Telemetering Conference Proceedings, October 20-23, 2003, Riviera Hotel and Convention Center, Las Vegas, Nevada. The Delta expendable launch vehicle has flown onboard video cameras. The camera is an NTSC analog camera that directly modulates an FM transmitter. A standard FM deviation is used to maximize link performance while minimizing transmitted bandwidth. Pre-emphasis per CCIR Recommendation 405 is used to improve the video signal-to-noise ratio. The camera and transmitter draw power from either a separate battery or the vehicle power system. Lighting is provided by sunlight, or a light may be added when sunlight is unavailable. Multiple cameras are accommodated either by using multiple transmitters or by switching between the individual cameras in flight. IRIG-B timing is used to correlate the video with other vehicle telemetry.
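For readers unfamiliar with the technique mentioned in this abstract: pre-emphasis boosts high video frequencies before FM modulation, and the inverse de-emphasis filter at the receiver attenuates the high-frequency-weighted FM noise, raising the delivered signal-to-noise ratio. A minimal first-order sketch in Python; the coefficient and filter order are illustrative assumptions, not the actual CCIR Recommendation 405 network:

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """First-order high-frequency boost applied before FM modulation."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]
    return y

def de_emphasis(y, alpha=0.95):
    """Exact inverse filter at the receiver; it also attenuates the
    high-frequency FM noise, which is where the SNR gain comes from."""
    x = np.empty_like(y)
    x[0] = y[0]
    for n in range(1, len(y)):
        x[n] = y[n] + alpha * x[n - 1]
    return x

# round trip on a synthetic video scan line
line = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))
assert np.allclose(de_emphasis(pre_emphasis(line)), line)
```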
13

Nietzsche, Deleuze and video art

Law, Sum-po, Jamsen, 羅琛堡. January 2000
Published or final version. Literary and Cultural Studies. Master of Arts.
14

Compression of image sequences using a non-Markov linear predictor

McAllister, Graham. January 2000
No description available.
15

The use of television for the teaching and learning of mathematics in secondary school

Norman, Naomi. January 2000
No description available.
16

Video object segmentation / 視頻物件分割法 (Shi pin wu jian fen ge fa)

January 2004
Mak Chun Man = 視頻物件分割法 / 麥振文 (romanized: Mak Chun Man = Shi pin wu jian fen ge fa / Mai Zhenwen). Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references. Text in English; abstracts in English and Chinese.
Contents:
  List of Figures --- p.III
  List of Tables --- p.III
  Chapter 1. Introduction --- p.1-1
    1.1. A Brief Review on Video Objects Segmentation --- p.1-1
    1.2. Objective of the Research Work --- p.1-3
    1.3. Organization of the Thesis --- p.1-4
    1.4. Notes on Publication --- p.1-5
  Chapter 2. Background Information --- p.2-1
    2.1. Introduction --- p.2-1
    2.2. Review of common video coding standards --- p.2-3
      2.2.1. H.261 --- p.2-3
      2.2.2. MPEG-1 --- p.2-4
      2.2.3. MPEG-2 --- p.2-4
      2.2.4. MPEG-4 --- p.2-5
    2.3. Reviews of video objects segmentation methods --- p.2-7
      2.3.1. Motion Segmentation --- p.2-8
      2.3.2. Temporal & Spatial Segmentation --- p.2-9
        2.3.2.1. Change Detection --- p.2-10
        2.3.2.2. Morphological Filtering --- p.2-11
        2.3.2.3. Image Segmentation --- p.2-12
        2.3.2.4. Active Contour - Snake --- p.2-13
      2.3.3. Application specific & human aided --- p.2-13
        2.3.3.1. Manual Object Extraction --- p.2-13
        2.3.3.2. Static Camera --- p.2-14
        2.3.3.3. 3D video --- p.2-15
        2.3.3.4. Video Conferencing and Face Segmentation --- p.2-15
        2.3.3.5. Text Extraction --- p.2-16
    2.4. Conclusions --- p.2-16
  Chapter 3. Global Motion Estimation --- p.3-1
    3.1. Introduction --- p.3-1
    3.2. Background Information --- p.3-2
      3.2.1. Motion Models --- p.3-2
      3.2.2. Estimation Methods --- p.3-5
    3.3. Robust Regression: Least Median of Square Error --- p.3-8
      3.3.1. Review of Least Median of Square Error --- p.3-8
      3.3.2. Applying LMedS on Global Motion Estimation --- p.3-11
    3.4. Modifications to LMedS --- p.3-12
    3.5. Experimental Results --- p.3-15
    3.6. Conclusions --- p.3-23
    3.7. Notes on Publication --- p.3-24
  Chapter 4. System Overview --- p.4-1
    4.1. Introduction --- p.4-1
    4.2. Assumptions --- p.4-1
      4.2.1. Objects in motion --- p.4-2
      4.2.2. Motion is slow --- p.4-2
      4.2.3. Change of object shapes --- p.4-2
      4.2.4. Background size --- p.4-3
    4.3. System Description --- p.4-3
      4.3.1. Motion Detection --- p.4-5
        4.3.1.1. Motion Estimation --- p.4-5
        4.3.1.2. Global Motion Estimation & Compensation --- p.4-10
        4.3.1.3. Change Detection Mask --- p.4-12
        4.3.1.4. FP size thresholding --- p.4-14
        4.3.1.5. FP clustering --- p.4-15
      4.3.2. Spatial Features Extraction --- p.4-19
        4.3.2.1. Edge Detection --- p.4-20
        4.3.2.2. Region Growing --- p.4-20
      4.3.3. Labeling and Boundary Tracking --- p.4-21
        4.3.3.1. Objects' Locations Updates --- p.4-21
        4.3.3.2. Foreground Pixel Clusters Labeling --- p.4-23
        4.3.3.3. Slow and Rapid Components Tracking --- p.4-25
        4.3.3.4. New Model Initialization --- p.4-26
      4.3.4. Boundary Refinement --- p.4-26
        4.3.4.1. Filling-in Process --- p.4-27
        4.3.4.2. Boundary Correction --- p.4-27
    4.4. Experimental Results --- p.4-32
      4.4.1. Qualitative Evaluation --- p.4-32
        4.4.1.1. Summary of the Qualitative Evaluation Results --- p.4-34
      4.4.2. Quantitative Evaluation --- p.4-35
    4.5. Conclusions --- p.4-46
    4.6. Notes on Publications --- p.4-46
  Chapter 5. Conclusions & Future Works --- p.5-1
    5.1. Contributions and Conclusions --- p.5-1
      5.1.1. Multiple object support --- p.5-1
      5.1.2. Global Motion Estimation --- p.5-2
    5.2. Future Works --- p.5-3
  References
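The core tool named in Chapter 3 of this record is Least-Median-of-Squares (LMedS) regression for global motion estimation: candidate motion models are fitted to minimal random samples, and the candidate minimizing the median squared residual wins, so outliers from independently moving foreground objects are tolerated. A minimal sketch assuming a pure-translation camera model and precomputed block motion vectors; the thesis's modified LMedS and richer motion models are not reproduced here:

```python
import numpy as np

def lmeds_translation(vectors, trials=200, seed=0):
    """Least-Median-of-Squares estimate of a global 2-D translation.

    vectors: (N, 2) array of block motion vectors. Background blocks follow
    the camera motion; blocks on moving objects are outliers, which LMedS
    tolerates up to roughly 50% contamination.
    """
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        candidate = vectors[rng.integers(len(vectors))]  # minimal sample: one vector
        med = np.median(np.sum((vectors - candidate) ** 2, axis=1))
        if med < best_med:
            best, best_med = candidate, med
    return best, best_med

# 70% background blocks drifting by (3, -1), 30% outliers from a moving object
bg = np.tile([3.0, -1.0], (70, 1)) + np.random.default_rng(1).normal(0, 0.2, (70, 2))
fg = np.random.default_rng(2).uniform(-8, 8, (30, 2))
motion, _ = lmeds_translation(np.vstack([bg, fg]))  # close to (3, -1)
```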
17

Video object segmentation

January 2006
Wei Wei. Thesis submitted in December 2005. Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 112-122). Abstracts in English and Chinese.
Contents:
  Abstract --- p.II
  List of Abbreviations --- p.IV
  Chapter 1. Introduction --- p.1
    1.1. Overview of Content-based Video Standard --- p.1
    1.2. Video Object Segmentation --- p.4
      1.2.1. Video Object Plane (VOP) --- p.4
      1.2.2. Object Segmentation --- p.5
    1.3. Problems of Video Object Segmentation --- p.6
    1.4. Objective of the research work --- p.7
    1.5. Organization of This Thesis --- p.8
    1.6. Notes on Publication --- p.8
  Chapter 2. Literature Review --- p.10
    2.1. What is segmentation? --- p.10
      2.1.1. Manual Segmentation --- p.10
      2.1.2. Automatic Segmentation --- p.11
      2.1.3. Semi-automatic segmentation --- p.12
    2.2. Segmentation Strategy --- p.14
    2.3. Segmentation of Moving Objects --- p.17
      2.3.1. Motion --- p.18
      2.3.2. Motion Field Representation --- p.19
      2.3.3. Video Object Segmentation --- p.25
    2.4. Summary --- p.35
  Chapter 3. Automatic Video Object Segmentation Algorithm --- p.37
    3.1. Spatial Segmentation --- p.38
      3.1.1. k-Medians Clustering Algorithm --- p.39
      3.1.2. Cluster Number Estimation --- p.41
      3.1.3. Region Merging --- p.46
    3.2. Foreground Detection --- p.48
      3.2.1. Global Motion Estimation --- p.49
      3.2.2. Detection of Moving Objects --- p.50
    3.3. Object Tracking and Extracting --- p.50
      3.3.1. Binary Model Tracking --- p.51
        3.3.1.2. Initial Model Extraction --- p.53
      3.3.2. Region Descriptor Tracking --- p.59
    3.4. Results and Discussions --- p.65
      3.4.1. Objective Evaluation --- p.65
      3.4.2. Subjective Evaluation --- p.66
    3.5. Conclusion --- p.74
  Chapter 4. Disparity Estimation and its Application in Video Object Segmentation --- p.76
    4.1. Disparity Estimation --- p.79
      4.1.1. Seed Selection --- p.80
      4.1.2. Edge-based Matching by Propagation --- p.82
    4.2. Remedy Matching Sparseness by Interpolation --- p.84
    4.3. Disparity Applications in Video Conference Segmentation --- p.92
    4.4. Conclusion --- p.106
  Chapter 5. Conclusion and Future Work --- p.108
    5.1. Conclusion and Contribution --- p.108
    5.2. Future work --- p.109
  Reference --- p.112
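Chapter 3 of this record builds its spatial segmentation on k-medians clustering, a variant of k-means that uses L1 distances and coordinate-wise medians and is therefore less sensitive to outlying pixels. A minimal sketch; the feature space and the thesis's automatic cluster-number estimation are assumptions left to the caller:

```python
import numpy as np

def k_medians(points, k, iters=50, seed=0):
    """k-medians clustering: k-means with L1 distances and per-coordinate medians."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center under the L1 (city-block) distance
        dists = np.abs(points[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the coordinate-wise median of its members
        new_centers = np.array([
            np.median(points[labels == j], axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# e.g. cluster pixels represented as (row, col, Y, U, V) feature vectors
```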
18

Motion-adaptive transforms for highly scalable video compression

Secker, Andrew J., Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. January 2004
This thesis investigates motion-adaptive temporal transformations and motion parameter coding schemes, for highly scalable video compression. The first aspect of this work proposes a new framework for constructing temporal discrete wavelet transforms, based on motion-compensated lifting steps. The use of lifting preserves invertibility regardless of the selected motion model. By contrast, the invertibility requirement has restricted previous approaches to either block-based or global motion compensation. We show that the proposed framework effectively applies the temporal wavelet transform along the motion trajectories. Video sequences reconstructed at reduced frame-rates, from subsets of the compressed bitstream, demonstrate the visually pleasing properties expected from lowpass filtering along the motion trajectories. Experimental results demonstrate the effectiveness of temporal wavelet kernels other than the simple Haar. We also demonstrate the benefits of complex motion modelling, by using a deformable triangular mesh. These advances are either incompatible or difficult to achieve with previously proposed strategies for scalable video compression. A second aspect of this work involves new methods for the representation, compression and rate allocation of the motion information. We first describe a compact representation for the various motion mappings associated with the proposed lifting transform. This representation significantly reduces the number of distinct motion fields that must be transmitted to the decoder. We also incorporate a rate scalable scheme for coding the motion parameters. This is achieved by constructing a set of quality layers for the motion information, in a manner similar to that used to construct the scalable sample representation. When the motion layers are truncated, the decoder receives a quantized version of the motion parameters used to code the sample data. A linear model is employed to quantify the effects of motion parameter quantization on the reconstructed video distortion. This allows the optimal trade-off between motion and subband sample bit-rates to be determined after the motion and sample data have been compressed. Two schemes are proposed to determine the optimal trade-off between motion and sample bit-rates. The first scheme employs a simple but effective brute force search approach. A second scheme explicitly utilizes the linear model, and yields comparable performance to the brute force scheme, with significantly less computational cost. The high performance of the second scheme also serves to reinforce the validity of the linear model itself. In comparison to existing scalable coding schemes, the proposed video coder achieves significantly higher compression performance, and motion scalability facilitates efficient compression even at low bit-rates. Experimental results show that the proposed scheme is also competitive with state-of-the-art non-scalable video coders.
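The key construction in this abstract, a motion-compensated lifting step, can be made concrete in a few lines. A sketch of the Haar case: the warp operators below are placeholders standing in for any motion model (block-based, mesh-based, even non-invertible), and perfect reconstruction holds regardless, which is precisely the property the abstract highlights:

```python
import numpy as np

def mc_haar_forward(even, odd, warp_f, warp_b):
    """One motion-compensated temporal Haar lifting step on a frame pair.

    warp_f(frame): motion-compensated prediction of `odd` from `even`
    warp_b(frame): maps data from `odd`'s grid back onto `even`'s grid
    Both may be arbitrary, even non-invertible, operators.
    """
    high = odd - warp_f(even)        # predict step, along motion trajectories
    low = even + 0.5 * warp_b(high)  # update step
    return low, high

def mc_haar_inverse(low, high, warp_f, warp_b):
    even = low - 0.5 * warp_b(high)  # undo the update step exactly
    odd = high + warp_f(even)        # undo the predict step exactly
    return even, odd

# perfect reconstruction holds for any warp, here a crude 1-pixel shift
shift = lambda f: np.roll(f, 1, axis=1)
a, b = np.random.default_rng(0).random((2, 16, 16))
lo, hi = mc_haar_forward(a, b, shift, shift)
ra, rb = mc_haar_inverse(lo, hi, shift, shift)
assert np.allclose(ra, a) and np.allclose(rb, b)
```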
19

Video annotation tools

Chaudhary, Ahmed. 10 October 2008
This research deals with annotations in scholarly work. Annotations have been studied extensively, and a significant body of research has shown that, instead of implementing domain-specific annotation applications, a better approach is to develop general-purpose annotation toolkits that can be used to create domain-specific applications. A video annotation toolkit, along with toolkits for searching, retrieving, analyzing and presenting videos, can help achieve the broader goal of creating integrated workspaces for scholarly work in humanities research, similar to existing environments in such fields as mathematics, engineering, statistics, software development and bioinformatics. This research implements a video annotation toolkit and evaluates it by examining its usefulness in creating applications for different areas. It was found that many areas of study in the arts and sciences can benefit from a video annotation application tailored to their specific needs, and that an annotation toolkit can significantly reduce the time needed to develop such applications. The toolkit was engineered through successive refinements of prototype applications developed for different application areas. The toolkit design was also guided by a set of features identified by the research community for an ideal general-purpose annotation toolkit. This research contributes by combining these two approaches to toolkit design and construction into a hybrid approach, which could be useful for similar or related efforts.
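As an illustration of what the core of such a toolkit might look like, here is a hypothetical minimal data model, not the thesis's actual API: a time-anchored annotation record plus one query that domain-specific applications could build on:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A time-anchored annotation, the core record such a toolkit would manage."""
    video_id: str
    start_s: float            # start of the annotated interval, in seconds
    end_s: float              # end of the interval
    author: str
    body: str                 # free text; domain apps can layer structure on top
    tags: list = field(default_factory=list)

def active_at(annotations, t):
    """Annotations covering playback time t, e.g. for rendering an overlay."""
    return [a for a in annotations if a.start_s <= t <= a.end_s]

notes = [Annotation("lecture01", 12.0, 30.5, "achaudhary", "defines 'annotation'")]
print(active_at(notes, 15.0))
```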
20

Sending Video Over WiMAX for Inter-Vehicle Communications

Lawal, Funmilayo. 08 February 2011
We present an OPNET model that uses WiMAX technology to send video packets in an advanced inter-vehicle VANET environment. Our work focuses on real-time video streaming. A video model was created from a live traffic trace and then integrated into a WiMAX OPNET model. VANET mobility was modeled with a real-world road map and VANET mobility simulators. We integrate an implementable congestion controller over RTP, providing a framework for future road-safety development. Different mobility cases are studied, and performance measures such as end-to-end delay, jitter and visual experience are evaluated. Different design considerations are presented to enable designers to build effectively on this work and develop a realistic video VANET simulation model.
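The abstract does not specify the controller's algorithm; as a hedged illustration of congestion control over RTP in this setting, here is a generic AIMD-style rate adapter driven by RTCP-like receiver feedback, with purely illustrative thresholds and gains:

```python
def adapt_bitrate(rate_kbps, loss_frac, jitter_ms,
                  floor_kbps=64.0, ceil_kbps=2000.0):
    """AIMD-style video rate adaptation from RTCP-like receiver reports.

    Back off multiplicatively when feedback signals congestion; otherwise
    probe upward additively. All constants here are illustrative only.
    """
    if loss_frac > 0.02 or jitter_ms > 50.0:          # congestion signal
        return max(floor_kbps, rate_kbps * 0.85)      # multiplicative decrease
    return min(ceil_kbps, rate_kbps + 32.0)           # additive increase

# each feedback interval: rate = adapt_bitrate(rate, report.loss, report.jitter)
```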
