331

Low power scan testing and test data compression

Lee, Jinkyu, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
332

Joint source-channel distortion modeling for image and video communication

Sabir, Muhammad Farooq, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
333

Subjective Evaluation of an Edge-based Depth Image Compression Scheme

Li, Yun, Sjöström, Mårten, Jennehag, Ulf, Olsson, Roger, Sylvain, Tourancheau January 2013 (has links)
Multi-view three-dimensional television requires many views, which may be synthesized from two-dimensional images with accompanying pixel-wise depth information. This depth image, which typically consists of smooth areas and sharp transitions at object borders, must be consistent with the acquired scene in order for synthesized views to be of good quality. We have previously proposed a depth image coding scheme that preserves significant edges and encodes the smooth areas between them. An objective evaluation considering the structural similarity (SSIM) index for synthesized views demonstrated an advantage of the proposed scheme over the High Efficiency Video Coding (HEVC) intra mode in certain cases. However, there were some discrepancies between the outcomes of the objective evaluation and of our visual inspection, which motivated this subjective study. The test was conducted according to the ITU-R BT.500-13 recommendation using the stimulus-comparison method. The results of the subjective test showed that the proposed scheme performs slightly better than HEVC, with statistical significance at a majority of the tested bit rates for the given contents.
334

Profile extrusion of wood plastic cellular composites and formulation evaluation using compression molding

Islam, Mohammad Rubyet 01 May 2010 (has links)
Wood Plastic Composites (WPCs) have experienced healthy growth during the last decade. However, improvement in properties is necessary to increase their utility for structural applications. The toughness of WPCs can be improved by creating a fine cellular structure while reducing the density. Extrusion processing is one of the most economical methods for profile formation. For our study, rectangular profiles were extruded using a twin-screw extrusion system with different grades of HDPE and with varying wood fibre and lubricant contents, together with a maleated polyethylene (MAPE) coupling agent, to investigate their effects on WPC processing and mechanical properties. Work has been done to redesign the extrusion system setup to achieve smoother and stronger profiles. A guiding shaper, submerged in the water, has been designed to guide the material directly through water immediately after it exits the die, instead of passing it through a water-cooled vacuum calibrator and then through water. In this way a skin was formed quickly, which facilitated the production of smoother profiles. A chemical blowing agent (CBA) was then used to generate a cellular structure in the profile with the same extrusion system. CBA content, die temperature, drawdown ratio (DDR), and wood fibre content (WF) were varied to optimize mechanical properties and morphology. Cell morphology and fibre alignment were characterized by a scanning electron microscope (SEM). A new compression molding system was developed to help in quick evaluation of different material formulations. This system forces the materials to flow in one direction to achieve higher net alignment of fibres during sample preparation, as is the case during profile extrusion. Operation parameters were optimized, and improvements in WPC properties were observed compared to samples prepared by conventional hot press and profile extrusion. / UOIT
335

DCT Implementation on GPU

Tokdemir, Serpil 04 December 2006 (has links)
There has been great progress in the field of graphics processors. Since the speed of conventional CPU cores is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for utilizing GPUs, there is a great need to develop software that drives the GPU to full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the discrete cosine transform (DCT).
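The DCT used in image compression is separable, which is what makes it map well onto parallel hardware: each 8×8 block is transformed independently by two small matrix multiplications. A minimal CPU-side sketch of the transform in Python with NumPy (an illustration of the standard DCT-II, not the thesis's GPU implementation):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C[k, m] = a(k) * cos(pi * (2m + 1) * k / (2n))
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def block_dct2(block):
    # Separable 2-D DCT: transform rows, then columns
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

# A flat 8x8 block concentrates all of its energy in the DC coefficient
flat = np.full((8, 8), 100.0)
coeffs = block_dct2(flat)
```

On a GPU, each block (and each row of the matrix products) is an independent unit of work, which is where the parallel speedup comes from.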
336

Improving web server efficiency on commodity hardware

Beltrán Querol, Vicenç 03 October 2008 (has links)
The unstoppable growth of the World Wide Web requires a huge amount of computational resources that must be used efficiently. Nowadays, commodity hardware is the preferred platform to run web server systems because it is the most cost-effective solution. The work presented in this thesis aims to improve the efficiency of current web server systems, allowing the web servers to make the most of hardware resources. To this end, we first characterize current web server systems and identify the problems that hinder web servers from providing an efficient utilization of resources. From the study of web servers in a wide range of situations and environments, we have identified two main issues that prevent web server systems from efficiently using current hardware resources. The first is the extension of the HTTP protocol to include connection persistence and security, which dramatically impacts the performance and configuration complexity of traditional multi-threaded web servers. The second is the memory-bound or disk-bound nature of some web workloads, which prevents the full utilization of the abundant CPU resources available on current commodity hardware. We propose two novel techniques to overcome the main problems with current web server systems. First, we propose a hybrid web server architecture, which can be easily implemented in any multi-threaded web server, to improve CPU utilization and provide better management of client connections. Second, we describe a main memory compression technique implemented in the Linux operating system that makes optimal use of current multiprocessor hardware in order to improve the performance of memory-bound web applications. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems. It is worth noting that the main concepts behind the hybrid architecture have recently been implemented in popular web servers such as Apache, Tomcat and Glassfish.
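The main-memory compression idea can be illustrated in user space with a few lines of Python. This is only a sketch of the principle (the thesis implements it inside the Linux kernel), and the page contents below are made up for illustration:

```python
import zlib

PAGE_SIZE = 4096

def compress_page(page: bytes) -> bytes:
    # Keep the compressed form only if it actually saves space;
    # incompressible pages are stored as-is.
    packed = zlib.compress(page, level=1)  # fast level, as a kernel would favor
    return packed if len(packed) < len(page) else page

# Typical in-memory pages are highly redundant (zero-filled, text, markup)
zero_page = bytes(PAGE_SIZE)
text_page = (b"GET /index.html HTTP/1.1\r\n" * 160)[:PAGE_SIZE]

for name, page in [("zero", zero_page), ("text", text_page)]:
    ratio = len(compress_page(page)) / PAGE_SIZE
    print(f"{name} page: {ratio:.2%} of original size")
```

The win for memory-bound workloads is that many more "pages" fit in RAM, trading cheap CPU cycles for avoided disk accesses.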
337

Indexing Compressed Text

He, Meng January 2003 (has links)
As a result of the rapid growth of the volume of electronic data, text compression and indexing techniques are receiving more and more attention. These two issues are usually treated as independent problems, but approaches combining them have recently attracted the attention of researchers. In this thesis, we review and test some of the more effective and some of the more theoretically interesting techniques. Various compression and indexing techniques are presented, and we also present two compressed text indices. Based on these techniques, we implement a compressed full-text index, so that compressed texts can be indexed to support fast queries without decompressing the whole text. The experiments show that our index is compact and supports fast search.
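Compressed full-text indices of this kind are typically built on the Burrows–Wheeler transform: pattern occurrences are counted by a backward search over the BWT, without touching the original text. A toy sketch of that core idea (a real index replaces the linear `occ` scan with compressed rank structures):

```python
def bwt(text):
    text += "\0"  # sentinel, lexicographically smaller than any character
    order = sorted(range(len(text)), key=lambda i: text[i:] + text[:i])
    return "".join(text[i - 1] for i in order)

def count_occurrences(bwt_str, pattern):
    # C[c]: number of characters in the text strictly smaller than c
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch], total = total, total + counts[ch]
    # occ(c, i): occurrences of c in bwt_str[:i]; a real FM-index answers
    # this in O(1) from a compressed structure, not a linear scan
    occ = lambda ch, i: bwt_str[:i].count(ch)
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):  # backward search, one character at a time
        if ch not in C:
            return 0
        lo, hi = C[ch] + occ(ch, lo), C[ch] + occ(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

index = bwt("mississippi")
print(count_occurrences(index, "ssi"))  # 2 occurrences
```

Note that the search runs entirely over the (compressible) BWT string, which is exactly why such indices can answer queries without decompressing the whole text.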
338

Model Based Diagnosis of an Air Source Heat Pump / Modellbaserad Diagnos av en Luftvärmepump

Alfredsson, Sandra January 2011 (has links)
The purpose of a heat pump is to control the temperature of an enclosed space. This is done by exchanging heat with a heat source, for example water, air, or the ground. In the air source heat pump studied in this master's thesis, a refrigerant exchanges heat with the outdoor air and with a water distribution system. The heat pump is controlled through the circuit containing the refrigerant, and it is therefore crucial that this circuit is functional. To ensure this, a diagnosis system has been created to detect and isolate sensor faults. The diagnosis system is based on mathematical models of the refrigerant circuit with its main components: a compressor, an expansion valve, a plate heat exchanger, an air heat exchanger, and a four-way valve. Data has been collected from temperature and pressure sensors on an air source heat pump. The data has then been divided into data for model estimation and data for model validation. The models are used to create test quantities, which in turn are used by a diagnosis algorithm to determine whether a fault has occurred or not. There are nine temperature sensors and two pressure sensors on the studied air source heat pump. Four fault modes have been investigated for each sensor: Stuck, Offset, Short circuit, and Open circuit. The designed diagnosis system is able to detect all of the investigated fault modes and isolate 40 out of 44 single faults. However, there is room for improvement by constructing more test quantities to detect faults and decouple more fault modes. To further develop the diagnosis system, the existing models can be improved and new models can be created.
340

Lattice Compression of Polynomial Matrices

Li, Chao January 2007 (has links)
This thesis investigates lattice compression of polynomial matrices over finite fields. For an m × n matrix, the goal of lattice compression is to find an m × (m+k) matrix, for some relatively small k, such that the lattice spans of the two matrices are equivalent. Any m × n polynomial matrix A with degree bound d can be compressed by multiplying by a random n × (m+k) matrix B with degree bound s. In this thesis, we prove that there is a positive probability that L(A) = L(AB) with k(s+1) = Θ(log(md)). This is shown to hold even when s = 0 (i.e., where B is a matrix of constants). We also design a competitive probabilistic lattice compression algorithm of the Las Vegas type that has a positive probability of success on any input and requires Õ(n m^(θ-1) B(d)) field operations.
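The constant case (s = 0) can be illustrated over a small prime field: multiplying an m × n matrix A by a random n × (m+k) matrix B always gives a column span of AB contained in that of A, so equal rank forces equal span, and random choice of B preserves the rank with high probability. A sketch over GF(7) with hypothetical sizes (the thesis works with polynomial matrices and lattice spans; this only demonstrates the rank-preservation phenomenon):

```python
import random

P = 7  # a small prime field GF(7)

def rank_mod_p(mat, p=P):
    # Gaussian elimination over GF(p); returns the rank
    rows = [list(r) for r in mat]
    rank = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], p - 2, p)  # inverse via Fermat's little theorem
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def trial(m=2, n=6, k=2):
    A = [[random.randrange(P) for _ in range(n)] for _ in range(m)]
    B = [[random.randrange(P) for _ in range(m + k)] for _ in range(n)]
    AB = [[sum(a * b for a, b in zip(row, col)) % P for col in zip(*B)]
          for row in A]
    # col(AB) is always contained in col(A), so equal rank means equal span
    return rank_mod_p(AB) == rank_mod_p(A)

random.seed(0)
successes = sum(trial() for _ in range(200))
print(f"column span preserved in {successes}/200 random compressions")
```

The compressed matrix AB has only m + k columns instead of n, yet almost always carries the same column space; the thesis's contribution is making this precise for lattice spans of polynomial matrices and bounding k(s+1).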
