751

Multiple Synchronized Video Streams on IP Network

Forsgren, Gustav January 2014 (has links)
Video surveillance today can look very different depending on the objective and on the location where it is used. Some applications need a high image resolution and frame rate to carefully analyze what a camera sees, while other applications can achieve their goals with a lower resolution and frame rate. The communication between a camera and an observer depends heavily on the distance between them and on the content being sent. If the observer is far away, the information arrives with a delay, and if the medium carrying the information is unreliable, the observer has to keep this in mind. Lost information might not be acceptable for some applications, and some applications might not need their information instantly. In this master thesis, IP network communication for an automatic tolling station has been simulated, where several video streams from different sources have to be synchronized. Image quality and frame rate are both very important in this type of surveillance, where simultaneously exposed images are processed together. The report includes short descriptions of some networking protocols and descriptions of two implementations based on them. The implementations were done in C++ using the basic socket API to evaluate the network communication. Two communication methods were used in the implementations, where the idea was either to push or to poll images. To simulate the tolling station and create a network with several nodes, a number of Raspberry Pis were used to execute the implementations. The report also discusses which video/image compression algorithms the system might benefit from, and how. The results of the network communication evaluation show that the communication should use a pushing implementation rather than a polling one. A polling method is needed when the transport medium is unreliable, but the network components handled the amount of simultaneously sent information very well without control logic in the application.
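Below is a minimal sketch, in Python rather than the thesis's C++, of the two communication patterns the abstract compares: a push sender that transmits a length-prefixed frame as soon as it exists, and a poll sender that waits for a one-byte request before each frame. The ports, frame size, and framing format are illustrative assumptions, not details taken from the thesis.

```python
# Illustrative sketch (not the thesis's C++ code) contrasting push and poll
# delivery of image frames over TCP sockets.
import socket
import struct
import threading
import time

FRAME = b"\x00" * 1024  # stand-in for one encoded image


def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf


def server(port, poll, n_frames=5):
    """Send length-prefixed frames; in poll mode, wait for a request first."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for _ in range(n_frames):
                if poll:
                    recv_exact(conn, 1)          # block until the receiver asks
                conn.sendall(struct.pack("!I", len(FRAME)) + FRAME)


def receiver(port, poll, n_frames=5):
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        for i in range(n_frames):
            if poll:
                cli.sendall(b"?")                # one-byte poll request
            size = struct.unpack("!I", recv_exact(cli, 4))[0]
            data = recv_exact(cli, size)
            print(f"{'poll' if poll else 'push'} frame {i}: {len(data)} bytes")


if __name__ == "__main__":
    for port, poll in ((5001, False), (5002, True)):
        threading.Thread(target=server, args=(port, poll), daemon=True).start()
        time.sleep(0.2)                          # give the server time to listen
        receiver(port, poll)
```

The push variant needs no per-frame request traffic, which is the property the evaluation found favorable when the network itself is reliable.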
752

Improving the Capabilities of Swath Bathymetry Sidescan Using Transmit Beamforming and Pulse Coding

Butowski, Marek 30 April 2014 (has links)
Swath bathymetry sidescan (SBS) sonar and the angle-of-arrival processing that underlies these systems have the capability to produce much higher resolution three-dimensional imagery and bathymetry than traditional beamformed approaches. However, the performance of these high resolution systems is limited by signal-to-noise ratio (SNR), and they are also susceptible to multipath interference. This thesis explores two methods for increasing SNR and mitigating multipath interference for SBS systems. The first, binary coded pulse transmission and pulse compression, is shown to increase the SNR and in turn reduce angle variance in SBS systems. The second, transmit beamforming, and more specifically steering and shading, is shown to increase both the acoustic power in the water and the directivity of the transmitted acoustic radiation. The transmit beamforming benefits are achieved by making use of the 8-element linear angle-of-arrival array typical in SBS sonars, but previously not utilized for transmit. Both simulations and real-world SBS experiments are devised and conducted, and it is shown that in practice pulse compression increases the SNR, and that transmit beamforming increases backscatter intensity and reduces the intensity of interfering multipaths. The improvement in achievable SNR and the reduction in multipath interference provided by the contributions in this thesis further strengthen the importance of SBS systems and angle-of-arrival based processing, as an alternative to beamforming, in underwater three-dimensional imaging and mapping. / Graduate
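As an illustration of the pulse-compression idea (not the thesis's signal chain), the sketch below transmits a Barker-13 binary phase code, buries it in noise, and recovers it with a matched filter; the particular code, noise level, and echo amplitude are assumptions chosen for the example.

```python
# Illustrative sketch of binary coded pulse compression.
# A Barker-13 phase code is embedded in noise and recovered by correlating
# with the code (a matched filter), which raises SNR by roughly the code
# length (~11 dB for a 13-chip code).
import numpy as np

rng = np.random.default_rng(0)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Place the coded pulse somewhere in a noisy receive window.
rx = rng.normal(0.0, 1.0, 500)                   # unit-variance noise
delay = 200
rx[delay:delay + barker13.size] += barker13      # echo amplitude 1 (0 dB per sample)

# Matched filter: slide the code over the received signal.
compressed = np.correlate(rx, barker13, mode="valid")

peak = int(np.argmax(np.abs(compressed)))
print("true delay:", delay, "estimated delay:", peak)
print("peak / residual std after compression:",
      np.abs(compressed[peak]) / np.std(np.delete(compressed, peak)))
```

The same correlation structure applies to any binary code; longer codes trade longer transmissions for larger compression gain.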
753

Graph Theory for the Discovery of Non-Parametric Audio Objects

Srinivasa, Christopher 28 July 2011 (has links)
A novel framework based on cluster co-occurrence and graph theory for structure discovery is applied to audio to find new types of audio objects which enable the compression of an input signal. These new objects differ from those found in current object coding schemes, as their shape is not restricted by any a priori psychoacoustic knowledge. The framework is novel from an application perspective, as it marks the first time that graph theory is applied to audio, and with regard to theoretical developments, as it involves new extensions to the areas of unsupervised learning algorithms and frequent subgraph mining methods. Tests are performed using a corpus of audio files spanning a wide range of sounds. Results show that the framework discovers new types of audio objects which yield average overall and relative compression gains of 15.90% and 23.53%, respectively, while maintaining a very good average audio quality with imperceptible changes.
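The abstract gives no algorithmic detail, so the following is only a hypothetical sketch of the general cluster co-occurrence idea: cluster labels assigned to time-frequency regions are turned into a weighted graph whose heavily repeated edges would be candidates for recurring structure. The labels here are random placeholders, not output of the thesis's learner.

```python
# Hypothetical illustration of turning cluster co-occurrence into a graph.
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(1)

# Pretend each time frame of a spectrogram received one cluster label per
# frequency band from some unsupervised learner (random stand-ins here).
n_frames, n_bands, n_clusters = 200, 8, 5
labels = rng.integers(0, n_clusters, size=(n_frames, n_bands))

# Edge weight = how often two cluster labels co-occur within the same frame.
edges = Counter()
for frame in labels:
    for a, b in combinations(sorted(set(frame.tolist())), 2):
        edges[(a, b)] += 1

# Frequently co-occurring label pairs would be candidate building blocks
# of larger, repeatedly mined subgraphs ("audio objects").
for (a, b), w in edges.most_common(5):
    print(f"clusters {a} and {b} co-occur in {w} frames")
```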
754

Probing Collective Multi-electron Effects with Few Cycle Laser Pulses

Shiner, Andrew 15 March 2013 (has links)
High Harmonic Generation (HHG) enables the production of bursts of coherent soft x-rays with attosecond pulse duration. This process arises from the nonlinear interaction between intense infrared laser pulses and an ionizing gas medium. Soft x-ray photons are used for spectroscopy of inner-shell electron correlation and exchange processes, and the availability of attosecond pulse durations will enable these processes to be resolved on their natural time scales. The maximum or cutoff photon energy in HHG increases with both the intensity and the wavelength of the driving laser. It is highly desirable to increase the harmonic cutoff, as this allows the generation of shorter attosecond pulses as well as HHG spectroscopy of increasingly energetic electronic transitions. While the harmonic cutoff increases with laser wavelength, there is a corresponding decrease in harmonic yield. The first part of this thesis describes the experimental measurement of the wavelength scaling of HHG efficiency, which we report as lambda^(-6.3) in xenon and lambda^(-6.5) in krypton. To increase the HHG cutoff, we have developed a 1.8 um source with stable carrier envelope phase and a pulse duration of <2 optical cycles. The 1.8 um wavelength allowed for a significant increase in the harmonic cutoff compared to equivalent 800 nm sources, while still maintaining reasonable harmonic yield. By focusing this source into neon we have produced 400 eV harmonics that extend into the x-ray water window. In addition to providing a source of photons for a secondary target, the HHG spectrum carries the signature of the electronic structure of the generating medium. In krypton we observed a Cooper minimum at 85 eV, showing that photoionization cross sections can be measured with HHG. Measurements in xenon led to the first clear observation of electron correlation effects during HHG, which manifest as a broad peak in the HHG spectrum centred at 100 eV. This thesis also describes several improvements to the HHG experiment, including the development of an ionization detector for measuring laser intensity, as well as an investigation into the role of laser mode quality in HHG phase matching and efficiency.
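The abstract does not quote a formula, but the statement that the cutoff grows with intensity and wavelength is conventionally captured by the semiclassical cutoff law E_cutoff ~ Ip + 3.17 Up, where the ponderomotive energy Up scales as I*lambda^2. The sketch below evaluates it for an assumed intensity and a neon target purely to show the wavelength leverage; the numbers are not taken from the thesis.

```python
# Back-of-envelope sketch of why the harmonic cutoff grows with wavelength.
# Uses the standard semiclassical cutoff law E_cutoff ~ Ip + 3.17 * Up, with
# Up[eV] ~ 9.33e-14 * I[W/cm^2] * lambda[um]^2. Intensity (2e14 W/cm^2) and
# target (neon, Ip = 21.56 eV) are illustrative assumptions.

def hhg_cutoff_ev(intensity_w_cm2, wavelength_um, ip_ev=21.56):
    up_ev = 9.33e-14 * intensity_w_cm2 * wavelength_um ** 2  # ponderomotive energy
    return ip_ev + 3.17 * up_ev

for lam in (0.8, 1.8):
    print(f"lambda = {lam} um -> cutoff ~ {hhg_cutoff_ev(2e14, lam):.0f} eV")
# The (1.8/0.8)^2 ~ 5x larger Up at 1.8 um is what pushes harmonics toward the
# water window, at the cost of the steep lambda^-6 drop in yield noted above.
```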
755

New efficient block-based motion estimation algorithms for video compression and their hardware implementations

Rehan, Mohamed Mohamed 04 February 2010 (has links)
Video compression technology aims at compressing large amounts of video data for efficient transmission and storage without significant loss of quality. Most video compression techniques rely on removing temporal data redundancy between frames using motion estimation and motion compensation techniques, which are generally very computationally expensive. The objective of the research done in this thesis is to develop new efficient motion estimation techniques that reduce the computational complexity of motion estimation. The thesis presents a new prediction technique referred to as weighted sum block matching (WSBM) which dynamically reduces the computational complexity by limiting the search to a small subset of the search area. Simulation results have shown that adding WSBM to some well-known search algorithms reduces their computational complexity by a factor of 1.5 to 6 without affecting the visual quality of the reconstructed video frames. The thesis also presents two new algorithms based on the simplex optimization method: the simplex-based block matching algorithm (SMPLX) and the flexible triangle search (FTS). Both techniques use a triangle that moves inside the search area and checks only positions that lie at its vertices. As a result, the computational complexity of the search is reduced, since it depends directly on the number of positions checked. The techniques can change the size and orientation of the search triangle during the search. These changes make the search highly flexible and efficient and reduce the number of search positions to be checked compared to those in other search algorithms. The SMPLX uses equations based on the simplex optimization method to compute the new triangle size and orientation. The FTS, on the other hand, was implemented to be more suitable for a digital search grid by using look-up tables and integer computations. The two algorithms were implemented as part of the H.263 and H.264 encoders. Both algorithms were compared to state-of-the-art motion search algorithms. Experimental results showed that both algorithms can reach sub-optimal solutions while checking fewer search positions than other algorithms, which results in lower computational complexity. Additional research was done to analyze and further improve FTS performance. As a result, various extensions of the FTS have been developed, such as the enhanced FTS (EFTS), the half-pixel FTS (HP-FTS), and the predictive FTS (PFTS). These extensions were also implemented as part of the H.263 and H.264 encoders. In the EFTS, repeated computations are reduced by caching intermediate results. In addition, the termination condition is modified to avoid premature exit. These modifications reduce the computational complexity of the FTS by up to 4%. The HP-FTS extends the FTS so that the search can be done at half-pixel resolution instead of full-pixel resolution. The commonly used approach for half-pixel search is based on two separate stages, i.e., full-pixel search followed by half-pixel search. By combining the two stages in HP-FTS, the overall computational complexity can be reduced by an average of 13% without affecting the produced quality or compression ratio. The PFTS uses prediction to select the direction of the starting search triangle. Analysis results show that the proper selection of the starting search triangle has a great effect on the performance of the FTS. Simulation results show that the PFTS can reduce the computational complexity of the FTS by 7-13%.
Finally, hardware designs for the FTS and the full search (FS) algorithms are proposed. The FS was chosen due to its regularity, low control overhead, and suitability for hardware implementation. It uses a high degree of parallelism and pipelining in order to improve the computational efficiency. The FTS requires less computation and thus provides high processing rates. Both designs were implemented, simulated, and verified using VHDL and then synthesized on Xilinx FPGAs. Simulation results have shown that both hardware implementations are more efficient than other existing implementations in terms of performance and hardware usage.
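For readers unfamiliar with block matching, the sketch below shows the sum-of-absolute-differences (SAD) cost and the exhaustive full search (FS) that serves as the baseline above; the FTS's moving triangle of candidate positions is only summarized in a comment, and the frames here are synthetic.

```python
# Illustrative sketch of block-matching motion estimation with a SAD cost and
# an exhaustive full search (FS). The thesis's FTS replaces this exhaustive
# scan with a triangle of candidate positions that shrinks and reorients; that
# logic is not reproduced here.
import numpy as np


def sad(block, candidate):
    return int(np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum())


def full_search(ref, cur, top, left, block=16, radius=7):
    """Return the motion vector (dy, dx) minimizing SAD within +/- radius pixels."""
    target = cur[top:top + block, left:left + block]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + block <= ref.shape[0] and x + block <= ref.shape[1]:
                cost = sad(target, ref[y:y + block, x:x + block])
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost


rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # every pixel moves down 2, right 3
# The matching reference block sits 2 rows up and 3 columns left of the
# current block, so the search should return ((-2, -3), 0).
print(full_search(ref, cur, top=24, left=24))
```

The FS checks (2*radius+1)^2 positions per block; the thesis's search algorithms aim to reach nearly the same minimum while evaluating only a small fraction of them.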
756

Matching with mismatches and assorted applications

Percival, Colin January 2006 (has links)
This thesis consists of three parts, each of independent interest, yet tied together by the problem of matching with mismatches. In the first chapter, we present a motivated exposition of a new randomized algorithm for indexed matching with mismatches which, for constant error (substitution) rates, locates a substring of length m within a string of length n faster than existing algorithms by a factor of O(m/log(n)). The second chapter turns from this theoretical problem to an entirely practical concern: delta compression of executable code. In contrast to earlier work, which has either generated very large deltas when applied to executable code or has generated small deltas by utilizing platform- and processor-specific knowledge, we present a naïve approach (that is, one which does not rely upon any external knowledge) which nevertheless constructs deltas of size comparable to those produced by a platform-specific approach. In the course of this construction, we utilize the result from the first chapter, although it is of primary utility only when producing deltas between very similar executables. The third chapter lies between the horn and ivory gates, being both highly interesting from a theoretical viewpoint and of great practical value. Using the algorithm for matching with mismatches from the first chapter, combined with error correcting codes, we give a practical algorithm for “universal” delta compression (often called “feedback-free file synchronization”) which can operate in the presence of multiple indels and a large number of substitutions.
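As a point of reference for the problem the first chapter speeds up, here is a naive baseline that counts substitutions of a pattern at every alignment of the text; the thesis's randomized indexed algorithm is not reproduced, only the problem it solves is illustrated.

```python
# Naive baseline for "matching with mismatches": report every alignment where
# the pattern matches the text with at most max_mismatches substitutions.
# This runs in O(n*m) time; the thesis targets the same problem much faster.
def matches_with_mismatches(text, pattern, max_mismatches):
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        mismatches = sum(1 for a, b in zip(text[i:i + m], pattern) if a != b)
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits


print(matches_with_mismatches("abracadabra", "acadx", max_mismatches=1))
# -> [(3, 1)]: the substring "acada" differs from "acadx" only in the last character.
```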
757

Tree encoding of speech signals at low bit rates

Chu, Chung Cheung. January 1986 (has links)
No description available.
758

The use of context in text compression /

Reich, Edwina Helen. January 1984 (has links)
No description available.
759

Vector quantization in residual-encoded linear prediction of speech

Abramson, Mark. January 1983 (has links)
No description available.
760

Control and Optimization of Vapor Compression Cycles Using Recursive Least Squares Estimation

Rani, Avinash August 2012 (has links)
Vapor compression cycles are the primary method by which refrigeration and air-conditioning systems operate, and thus constitute a significant portion of commercial and residential building energy consumption. This thesis presents a data-driven approach to find the optimal operating conditions of a multi-evaporator system in order to minimize the energy consumption while meeting operational requirements such as constant cooling or constant evaporator outlet temperature. The experimental system used for controller evaluation is a custom built small-scale water chiller with three evaporators; each evaporator services a separate body of water, referred to as a cooling zone. The three evaporators are connected to a single condenser and variable speed compressor, and feature variable water flow and electronic expansion valves. The control problem lies in the development of a control architecture that minimizes the energy consumed by the system without prior information about the system in the form of performance maps or complex mathematical models. The control architecture explored in this thesis relies on the data collected by sensors alone to formulate a function for the power consumption of the system in terms of the controlled variables, namely the condenser and evaporator pressures, using recursive least squares estimation. This cost function is then minimized to attain optimal set points for the pressures, which are fed to local controllers.
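A hedged sketch of the recursive least squares step follows: it fits an assumed quadratic power model in the two pressures from streaming synthetic measurements, using exponential forgetting. The regressor structure, operating ranges, and "true" parameters are all illustrative assumptions, not the thesis's identified model.

```python
# Sketch of recursive least squares (RLS) estimating a power-consumption model
# P(p_evap, p_cond) from streaming measurements. Model form and data are
# assumptions for illustration only.
import numpy as np


def make_regressor(p_evap, p_cond):
    # Assumed quadratic-in-pressures feature vector.
    return np.array([1.0, p_evap, p_cond, p_evap**2, p_cond**2, p_evap * p_cond])


rng = np.random.default_rng(0)
theta_true = np.array([5.0, -0.8, 1.2, 0.05, 0.03, -0.02])   # synthetic ground truth

n = theta_true.size
theta = np.zeros(n)          # parameter estimate
P = np.eye(n) * 1e3          # large initial covariance = little prior trust
lam = 0.99                   # forgetting factor, tracks slow drift

for _ in range(2000):
    p_evap = rng.uniform(3.0, 6.0)       # assumed evaporator pressure range (bar)
    p_cond = rng.uniform(9.0, 14.0)      # assumed condenser pressure range (bar)
    x = make_regressor(p_evap, p_cond)
    y = theta_true @ x + rng.normal(0, 0.05)   # noisy power reading (kW)

    # Standard RLS update with exponential forgetting.
    Px = P @ x
    gain = Px / (lam + x @ Px)
    theta = theta + gain * (y - x @ theta)
    P = (P - np.outer(gain, Px)) / lam

print("estimated parameters:", np.round(theta, 3))
```

Once the parameter vector has converged, the fitted surface can be minimized over the admissible pressure set points, which is the optimization step the abstract describes.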
