61

Comparison of two loading surface preparation methods on rat vertebral bodies for compression testing

Schumacher, Yvonne 01 October 2013 (has links)
Osteoporosis is a disease characterized by bone loss that affects 10% of the US population over 50 years old, and the spine is one critical area affected by the disease. The effectiveness of experimental treatments can be tested on an ovariectomized rat model of osteoporosis; lumbar vertebral bodies are therefore often mechanically tested in uniaxial compression to determine whether the mechanical properties of the bone in ovariectomized rats improve with treatment. The irregular shape of rat vertebral bodies requires some specimen preparation to create two parallel loading surfaces for uniaxial compression testing. Two specimen preparation methods are reported in the current literature: one cuts the cranial and caudal surfaces to make them parallel to each other, while the other cuts the caudal surface and uses bone cement to create a flat loading surface at the cranial end. In this thesis a total of twenty rat vertebral bodies were tested, ten prepared with the cut method and ten with the embedding method. Each specimen was tested in uniaxial compression and was microCT scanned before and after testing. Eleven parameters were calculated from the mechanical testing data and compared between the two groups using Student's t-tests, and the specimens were categorized into six failure modes and locations observed in the microCT images. The embedded specimens showed lower stiffness (p = 0.026), greater yield displacement (p = 0.007), and greater apparent strain at failure (p = 0.050); these differences were largely attributed to the embedded specimens being 1 mm taller than the cut specimens. The cut specimens were easier to prepare and were less sensitive to end-effect failures. The embedded specimens kept intact the endplate, which distributes the load from the intervertebral disk through the vertebral body. In addition, the embedded specimens exhibited two failure modes, endplate failure and failure at the center of the vertebral body, that are also observed in ex vivo human lumbar vertebral body testing, which suggests the interaction of the vertebral body with the endplate is an important factor in vertebral body failure under uniaxial compression. In conclusion, neither preparation method showed an overwhelming advantage over the other, and experimental parameters should be considered when choosing a loading surface preparation method. / Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2013-09-30 11:58:32.794
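Several of the reported quantities are simple functions of the load-displacement record. A minimal sketch of how stiffness, yield displacement, and apparent strain at failure might be extracted, assuming a sliding-window stiffness estimate, a load-maximum failure point, and a 0.2% offset yield convention (illustrative choices, not necessarily the definitions used in the thesis):

    import numpy as np

    def compression_parameters(disp_mm, load_N, height_mm):
        """Illustrative extraction from the loading branch of a uniaxial
        compression test; disp_mm and load_N are 1-D numpy arrays."""
        # Stiffness: steepest slope of linear fits over a sliding window.
        w = max(5, len(disp_mm) // 20)
        slopes = [np.polyfit(disp_mm[i:i + w], load_N[i:i + w], 1)[0]
                  for i in range(len(disp_mm) - w)]
        stiffness = max(slopes)                        # N/mm
        # Failure taken here as the load maximum (an assumption).
        i_fail = int(np.argmax(load_N))
        strain_at_failure = disp_mm[i_fail] / height_mm
        # Yield: first point falling below a stiffness line offset by
        # 0.2 % of specimen height (a common convention, assumed here).
        offset_line = stiffness * (disp_mm - 0.002 * height_mm)
        below = np.where(load_N < offset_line)[0]
        yield_disp = disp_mm[below[0]] if below.size else float('nan')
        return stiffness, yield_disp, strain_at_failure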
62

Performance and computational complexity optimization techniques in configurable video coding system

Kwon, Nyeongkyu. 10 April 2008 (has links)
No description available.
63

Online Deduplication for Distributed Databases

Xu, Lianghong 01 September 2016 (has links)
The rate of data growth outpaces the decline of hardware costs, and there has been an ever-increasing demand for reducing the storage and network overhead of online database management systems (DBMSs). The most widely used approach for data reduction in DBMSs is block-level compression. Although this method is simple and effective, it fails to address redundancy across blocks and therefore leaves significant room for improvement for many applications. This dissertation proposes a systematic approach, termed similarity-based deduplication, which reduces the amount of data stored on disk and transmitted over the network beyond the benefits provided by traditional compression schemes. To demonstrate the approach, we designed and implemented dbDedup, a lightweight record-level similarity-based deduplication engine for online DBMSs. The design of dbDedup exploits key observations we find in database workloads, including small item sizes, temporal locality, and the incremental nature of record updates. The proposed approach differs from traditional chunk-based deduplication approaches in that, instead of finding identical chunks anywhere else in the data corpus, similarity-based deduplication identifies a single similar data item and performs differential compression to remove the redundant parts for greater savings. To achieve high efficiency, dbDedup introduces novel encoding, caching and similarity selection techniques that significantly mitigate the deduplication overhead with minimal loss of compression ratio. For evaluation, we integrated dbDedup into the storage and replication components of a distributed NoSQL DBMS and analyzed its properties using four real datasets. Our results show that dbDedup achieves up to 37× reduction in the storage size and replication traffic of the database on its own, and up to 61× reduction when paired with the DBMS's block-level compression. dbDedup provides both benefits with negligible effect on DBMS throughput or client latency (average and tail).
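To make the similarity-based deduplication idea concrete, a minimal sketch under stated assumptions: the shingle sketch, most-shared-shingle candidate selection, and opcode-based delta below are illustrative stand-ins, not dbDedup's actual encoding, caching, or similarity-selection techniques.

    import difflib
    from collections import defaultdict

    index = defaultdict(set)   # shingle -> ids of records containing it
    store = {}                 # id -> ('full', text) or ('delta', base_id, ops)

    def sketch(text, k=8, n=4):
        # the n lexicographically smallest k-grams act as a cheap similarity sketch
        grams = {text[i:i + k] for i in range(max(1, len(text) - k + 1))}
        return sorted(grams)[:n]

    def delta(base, text):
        # differential compression: copy matching ranges from the base,
        # store only the non-matching bytes literally
        ops = []
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
                None, base, text).get_opcodes():
            ops.append(('copy', i1, i2) if tag == 'equal'
                       else ('ins', text[j1:j2]))
        return ops

    def undelta(base, ops):
        return ''.join(base[op[1]:op[2]] if op[0] == 'copy' else op[1]
                       for op in ops)

    def get(rec_id):
        rec = store[rec_id]
        return rec[1] if rec[0] == 'full' else undelta(get(rec[1]), rec[2])

    def put(rec_id, text):
        hits = [i for s in sketch(text) for i in index[s]]
        if hits:
            # a single most-similar prior record, not every identical chunk
            base_id = max(set(hits), key=hits.count)
            store[rec_id] = ('delta', base_id, delta(get(base_id), text))
        else:
            store[rec_id] = ('full', text)
        for s in sketch(text):
            index[s].add(rec_id)

A real engine would bound the delta-chain length and use a byte-oriented codec rather than textual opcodes, but the control flow above is the essence of "find one similar item, then delta-encode against it."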
64

Experimental study of axial compressive behavior of a hyper-elastic annular seal constrained in a pipe

Shaha, Rony 14 September 2016 (has links)
The compressive behavior of an annular rubber seal constrained in a pipe, and the interaction between the pipe and the seal, were studied experimentally using a specially designed test fixture that allowed concentric alignment of the seal within the pipe and axial compression with an electro-hydraulic Instron load frame. The hoop strain introduced in the pipe wall by the constrained lateral expansion of the seal displayed a parabolic distribution with a maximum at the mid-height of the seal, mirroring the parabolic shape of the seal's lateral expansion. For a constant gap between the seal and the pipe wall, the magnitude of the pipe strain increased with the friction coefficient of the interface between the seal and the compression rings, with strain rate, and with shape factor. The relationship between the apparent compressive modulus and the shape factor (beyond the experimental range) was studied using FEA. / October 2016
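For context, the shape factor and apparent compressive modulus of a bonded annular rubber pad are commonly related as follows in the rubber-engineering literature (a Gent-Lindley-type relation; the symbols and the relation itself are standard usage, not necessarily the exact definitions adopted in the thesis):

    \[
      S \;=\; \frac{\text{loaded area}}{\text{force-free area}}
        \;=\; \frac{\pi\,(D_o^{2}-D_i^{2})/4}{\pi\,(D_o+D_i)\,t}
        \;=\; \frac{D_o-D_i}{4\,t},
      \qquad
      E_c \;\approx\; E_0\,\bigl(1+2kS^{2}\bigr),
    \]

where D_o and D_i are the outer and inner diameters of the seal, t its height, E_0 the base elastic modulus, and k an empirical material constant.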
65

Compression multi-vues de flux autostéréoscopiques / Multi-view compression of autostereoscopic streams

Battin, Benjamin 04 July 2012 (has links)
The development of "3D" in the film industry, together with the marketing of consoles such as the Nintendo 3DS and of stereoscopic displays using (active or passive) glasses, attests to the growing place of depth in our relationship to the image. One of the latest 3D display technologies is the auto-stereoscopic screen: it displays several views of the same scene simultaneously (usually between 2 and 9) and requires no glasses to perceive depth. Auto-stereoscopic sequences represent a considerable volume of data (proportional to the number of views), further increased by the constant evolution of display technology: current screens offer ever higher resolutions and refresh rates, notably with the upcoming arrival of the UHDTV standard. These factors tend to produce ever larger volumes of data, which must be compressed for transmission over networks or for storage. This thesis, conducted within the "Cam-Relief" project, aims to develop software solutions dedicated to the multi-view compression of auto-stereoscopic sequences. Each contribution addresses one of the following needs: real-time compression of the sequences produced by our acquisition systems, lossless compression of sequences intended for post-production, and an alternative to the current multi-view compression standard, H.264/MultiView-Coding. We present three multi-view compression methods, each meeting one of these three needs. The first, called MICA (Multiview Image Compression Algorithm), is a real-time multi-view compression algorithm that exploits the inter-view correlation present in auto-stereoscopic sequences through difference images; these difference images also highlight the image areas subject to parallax, so that they can be preserved as much as possible from significant distortion. The second contribution, named Multiview-LS (Lossless), is a multi-view adaptation of the JPEG-LS algorithm: by modifying the structure of the JPEG-LS prediction scheme, it exploits the temporal and inter-view correlations specific to auto-stereoscopic sequences. The third compression scheme is based on the LDI (Layered-Depth Image) approach, with LDI generation relying on an innovative method using integer-valued disparity maps. We propose two schemes dedicated to compressing the chromatic information of the LDI, one based on the DCT and the other on the DWT, while the disparity information is encoded with a lossless compression algorithm.
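A minimal sketch of the difference-image idea behind MICA (illustrative only; the function names and the threshold are assumptions, and the actual algorithm involves far more than this):

    import numpy as np

    def difference_images(views):
        """views: list of equally sized grayscale frames (numpy arrays)
        captured at the same instant from adjacent viewpoints."""
        ref = views[0].astype(np.int16)
        # each non-reference view is represented by its residual w.r.t.
        # the reference; low-parallax regions yield near-zero residuals
        # that compress well
        return ref, [v.astype(np.int16) - ref for v in views[1:]]

    def parallax_mask(residual, thresh=16):
        # large residuals flag parallax-affected areas, which the encoder
        # should protect from strong quantization
        return np.abs(residual) > thresh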
66

Vylepšení víceproudé komprese / Improvements of multistream compression

Unger, Lukáš January 2010 (has links)
Multistream compression is based on a transformation significantly different from those commonly used for data compression. This Master's thesis concerns the use of this method for the compression of text files written in natural language. The main goal of the thesis is to find suitable preprocessing methods for the text transformation that would enable multistream compression to achieve better compression ratios, together with a search for the best methods for coding the individual streams. The practical part of the thesis deals with the implementation of several transformation algorithms in the XBW project.
67

Caractérisation des propriétés mécaniques des géomatériaux par technique de micro indentation / Characterization of the mechanical properties of the geomaterials by technique of microindentation

Ibrahim, Nidal 28 October 2008 (has links)
Micro-indentation is a characterization technique (using small specimens) that has recently become established in various fields (pharmaceutical, civil engineering, the oil industry, etc.), as it meets a number of practical requirements posed by the sampling problem. This thesis is devoted to characterizing the mechanical properties of geomaterials, especially petroleum-related rocks such as argillite, sandstone and chalk, which were used in the various experimental studies carried out during the thesis. After presenting the method for interpreting the indentation test in an isotropic medium, we developed a semi-analytical method based on the Green function to characterize transversely isotropic media by determining their five elastic parameters. The influence of various loadings (mechanical, thermal, hydric) on the mechanical properties of rocks was studied using micro-indentation together with the transversely isotropic interpretation method. We characterized the failure parameters (C and f) using the indentation test and a micro uniaxial compression (MCS) test carried out on the same indentation device. Using the indentation test and an inverse analysis method, we identified the parameters of an elastoplastic constitutive law (Drucker-Prager). In the absence of a direct solution to the indentation problem in the plastic regime, we resorted to numerical modelling with a finite element code (ABAQUS) to determine the computed indentation curve. This determination proved entirely convincing and was further validated by simulating triaxial compression tests on the same material.
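For reference, the standard isotropic interpretation of an indentation test (the Oliver-Pharr relations) extracts a reduced modulus from the unloading branch; this is the usual baseline that such transversely isotropic methods generalize, though the thesis's exact formulation may differ:

    \[
      E_r \;=\; \frac{\sqrt{\pi}}{2\beta}\,\frac{S}{\sqrt{A_c}},
      \qquad
      \frac{1}{E_r} \;=\; \frac{1-\nu^{2}}{E} \;+\; \frac{1-\nu_i^{2}}{E_i},
    \]

where S = dP/dh is the unloading contact stiffness, A_c the projected contact area, β a tip-geometry factor close to 1, and (E, ν) and (E_i, ν_i) the elastic constants of the sample and the indenter.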
68

Cinétique d'auto-inflammation de carburants gazeux à haute pression : étude expérimentale et de modélisation / Gaseous fuel autoignition kinetic at high pressure : experimental and modelling study

Yu, Yi 18 December 2012 (has links)
The autoignition delays of various gaseous fuel mixtures (methane, natural gas, syngas) at low and intermediate temperatures (800 to 1010 K) and high pressures (0.5 to 2.5 MPa) were measured in the rapid compression machine (RCM) of the University of Lille 1. Different amounts of hydrogen, or of additives representing a typical EGR composition (CO, CO2, H2O), were added to natural gas in order to study their effect on the autoignition delay. The effects of the operating conditions (pressure and temperature) and of the equivalence ratio of the mixtures were also studied. The GDF-kin® 4 mechanism developed by GDF-SUEZ was used to model the experimental results; it was improved to reproduce the autoignition delays under our study conditions, and the new mechanism was also validated against numerous experimental results from the literature.
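Ignition-delay measurements of this kind are commonly summarized by an Arrhenius-type correlation tau = A p^n exp(Ea / RT). A sketch of such a fit, a standard practice in the field rather than a method claimed by the thesis:

    import numpy as np

    R = 8.314  # J/(mol K)

    def fit_delay_correlation(tau_ms, p_MPa, T_K):
        """Least-squares fit of ln(tau) = ln(A) + n*ln(p) + (Ea/R)*(1/T);
        inputs are 1-D numpy arrays of measured delays and conditions."""
        X = np.column_stack([np.ones_like(T_K), np.log(p_MPa), 1.0 / T_K])
        coef, *_ = np.linalg.lstsq(X, np.log(tau_ms), rcond=None)
        lnA, n, Ea_over_R = coef
        return np.exp(lnA), n, Ea_over_R * R  # pre-factor, pressure exponent, Ea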
69

Optimal Parsing for dictionary text compression / Parsing optimal pour la compression du texte par dictionnaire

Langiu, Alessio 03 April 2012 (has links)
Dictionary-based compression algorithms include a parsing strategy to transform the input text into a sequence of dictionary phrases. Given a text, such a process is usually not unique and, for compression purposes, it makes sense to find, among the possible parsings, one that minimizes the final compression ratio. This is the parsing problem. An optimal parsing is a parsing strategy, or a parsing algorithm, that solves the parsing problem taking into account all the constraints of a compression algorithm or of a class of homogeneous compression algorithms. Such constraints are, for instance, the dictionary itself, i.e. the dynamic set of available phrases, and how much a phrase weighs on the compressed text, i.e. the length of the codeword that represents the phrase, also called the cost of encoding a dictionary pointer. In more than 30 years of history of dictionary-based text compression, plenty of algorithms, variants and extensions have appeared, and the approach has become one of the most appreciated and widely used in almost all storage and communication processes; yet only a few optimal parsing algorithms have been presented. Many compression algorithms still lack optimality of their parsing or, at least, a proof of optimality. This happens because there is no general model of the parsing problem that includes all dictionary-based algorithms, and because the existing optimal parsings work under too restrictive hypotheses. This work focuses on the parsing problem and presents both a general model for dictionary-based text compression, called the Dictionary-Symbolwise theory, and a general parsing algorithm that is proved to be optimal under some realistic hypotheses. This algorithm is called Dictionary-Symbolwise Flexible Parsing and covers almost all cases of dictionary-based text compression algorithms, together with the large class of their variants where the text is decomposed into a sequence of symbols and dictionary phrases. We further consider the case of a free mixture of a dictionary compressor and a symbolwise compressor; our Dictionary-Symbolwise Flexible Parsing covers this case as well. We indeed have an optimal parsing algorithm for dictionary-symbolwise compression where the dictionary is prefix-closed and the cost of encoding dictionary pointers is variable. The symbolwise compressor is any classical one that works in linear time, as many common variable-length encoders do. Our algorithm works under the assumption that a special graph, described in the thesis, is well defined; even if this condition is not satisfied, the same method can be used to obtain almost optimal parses. In detail, when the dictionary is LZ78-like, we show how to implement our algorithm in linear time; when the dictionary is LZ77-like, it can be implemented in O(n log n) time, where n is the length of the text. Both cases have O(n) space complexity. Although the main aim of this work is theoretical, some experimental results are presented to underline some practical effects of parsing optimality on compression performance, and more detailed experiments are hosted in a devoted appendix.
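To illustrate the parsing problem in its simplest form, here is a sketch of optimal parsing as a minimum-cost path over text positions. A fixed static dictionary and cost function are assumed purely for illustration; the thesis's Dictionary-Symbolwise Flexible Parsing handles dynamic dictionaries and variable pointer costs.

    def optimal_parse(text, dictionary, cost):
        """dictionary: set of phrases (assumed to include every single
        symbol, so a parse always exists); cost(phrase) -> bits."""
        n = len(text)
        INF = float('inf')
        best = [0.0] + [INF] * n   # best[i]: cheapest encoding of text[:i]
        back = [None] * (n + 1)    # back[j]: last phrase in that encoding
        for i in range(n):
            if best[i] == INF:
                continue
            for phrase in dictionary:
                j = i + len(phrase)
                if j <= n and text.startswith(phrase, i) \
                        and best[i] + cost(phrase) < best[j]:
                    best[j], back[j] = best[i] + cost(phrase), phrase
        parse, j = [], n
        while j > 0:               # walk the back-pointers to recover phrases
            parse.append(back[j])
            j -= len(back[j])
        return parse[::-1], best[n]

    # e.g. with a fixed 12-bit pointer cost:
    # optimal_parse("ababab", {"a", "b", "ab", "abab"}, lambda p: 12)

This shortest-path view is exactly what makes greedy parsing suboptimal: taking the locally longest phrase can force more expensive phrases later in the text.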
70

Variable block size motion estimation hardware for video encoders.

January 2007 (has links)
Li, Man Ho. / Thesis submitted in: November 2006. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 137-143). / Abstracts in English and Chinese. / In place of an abstract, the record reproduces the thesis table of contents, condensed here to chapter level: 1. Introduction (motivation, objectives, contributions); 2. Digital video compression (fundamentals of lossy coding; the motion estimation process, block-based matching, matching criteria and quality judgment; search algorithms: full search, three-step search, 2D-logarithmic search, diamond search, fast full search; complexity of fixed and variable block size, sub-pixel and multi-reference-frame estimation); 3. Arithmetic for video encoding (non-redundant, carry-save and signed-digit number systems; LSB-first and MSB-first bit-serial algorithms; absolute difference, multi-operand addition and comparison algorithms); 4. VLSI architectures for video encoding (FPGA platform; bit-parallel 1-D/2-D systolic arrays and tree architectures; an MSB-first bit-serial architecture with an early termination scheme; variable block size support; architecture selection for CIF/QCIF, SDTV and HDTV resolutions); 5. Results and comparison (throughput, latency, occupied resources, memory bandwidth, power consumption; comparison to ASIC and FPGA architectures in past literature); 6. Conclusion and future work; Appendix A: VHDL sources; Bibliography.
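As context for the architectures listed above, a software sketch of the baseline operation they accelerate: full-search block matching with the sum of absolute differences (SAD) criterion. Block and search-range sizes here are illustrative defaults.

    import numpy as np

    def full_search(cur, ref, bx, by, B=16, R=8):
        """Best motion vector for the BxB block of frame `cur` whose
        top-left corner is (by, bx), over a +/-R search window in the
        reference frame `ref`."""
        block = cur[by:by + B, bx:bx + B].astype(np.int32)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-R, R + 1):
            for dx in range(-R, R + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                    continue   # candidate block falls outside the frame
                cand = ref[y:y + B, x:x + B].astype(np.int32)
                sad = int(np.abs(block - cand).sum())   # matching criterion
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv, best_sad

Hardware designs like those in the table of contents reorganize exactly this loop nest into systolic or tree structures; the "SAD merger" module listed there suggests the usual way variable block sizes are supported, combining small-block SADs into larger-block SADs.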
