611

Gaze-based JPEG compression with varying quality factors

Nilsson, Henrik January 2019 (has links)
Background: With the rise of streaming services such as cloud gaming, a fast internet connection is required for a good overall experience, and the average connection does not meet the demands of cloud gaming, where high quality and a high frame rate are important. A solution to this problem is to display the parts of an image the user is looking at in higher quality than the rest of the image. Objectives: The objective of this thesis is to create a gaze-based lossy image compression algorithm that reduces quality where the user is not looking. Using different radial functions to determine the quality decrease, the perceptual quality is compared to traditional JPEG compression, as is the difference in storage. Methods: A gaze-based image compression algorithm, based on the JPEG algorithm, is developed with DirectX 12. The algorithm uses a Tobii eye tracker to determine where the user is gazing at the screen; when the gaze position changes, the algorithm is run again to recompress the image. A user study is conducted to test the perceived quality of this algorithm against traditional lossy JPEG compression. Two different radial functions are tested with various parameters to determine which offers the best perceived quality. The storage difference between this algorithm, under each radial function, and traditional JPEG compression is also measured. Results: With 11 participants, the results show that the gaze-based algorithm is perceptually equivalent to JPEG on images that have few objects close together. Images with many objects spread throughout the image performed worse with the gaze-based algorithm and were picked less often than traditional JPEG compression. Radial functions that cover a large part of the screen were picked more often than radial functions covering less of it. The storage required by the gaze-based algorithm was 60% to 80% less than traditional JPEG compression, depending on the image. Conclusions: The thesis concludes that substantial storage savings can be made by using gaze-based image compression instead of traditional JPEG compression. Images with few objects close together are perceptually indistinguishable under the gaze-based algorithm.
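The abstract does not give the exact radial functions or parameters used in the thesis; purely as an illustration, two common falloff shapes (linear and Gaussian, with hypothetical quality bounds and radii) could map distance from the gaze point to a JPEG quality factor like this:

```python
import math

def quality_linear(dist_px, q_max=90, q_min=20, radius_px=300):
    # Quality falls off linearly with distance from the gaze point,
    # clamped to q_min at radius_px and beyond (all values hypothetical).
    t = min(dist_px / radius_px, 1.0)
    return round(q_max - t * (q_max - q_min))

def quality_gaussian(dist_px, q_max=90, q_min=20, sigma_px=150):
    # Quality falls off smoothly following a Gaussian profile.
    w = math.exp(-(dist_px ** 2) / (2 * sigma_px ** 2))
    return round(q_min + w * (q_max - q_min))
```

Each image block would then be quantized with the quality factor returned for its distance from the current gaze position, so only the periphery loses detail.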
612

Mid-IR Laser Absorption Diagnostics for Shock Tube and Rapid Compression Machine Experiments

Nasir, Ehson Fawad 10 1900 (has links)
High-fidelity chemical kinetic models for low-temperature combustion processes require high-fidelity data from fundamental experiments conducted in idealized transient reactors, such as shock tubes and rapid compression machines (RCM). Non-intrusive laser absorption diagnostics, in particular quantum cascade lasers (QCL) in the mid-infrared wavelength region, provide a unique opportunity to obtain quantitative, time-resolved species concentration and temperature from these reactive systems. In this work, three novel laser absorption diagnostics in the mid-infrared wavelength region are presented for three different experimental applications. The first diagnostic was developed for measuring CO2 concentration using an external cavity QCL centered in the ν3 fundamental vibrational band of CO2. Absorption cross-sections were measured in a shock tube, at a fixed wavelength for the R(32) line centered at 2371.42 cm-1 (4.217 µm) over 700 – 2900 K and nominal pressures of 1, 5 and 10 bar. The diagnostic was used to measure rate coefficients for the reaction between carbon monoxide and hydroxyl radical over 700 – 1230 K and 1.2 – 9.8 bar using highly dilute mixtures. The second diagnostic was developed for measuring CO concentration using a pulsed QCL centered at 2046.28 cm-1 (4.887 µm) and an off-axis cavity implemented on the RCM. The duty cycle and pulse repetition rate of the laser were optimized for increased tuning range, high chirp rate and increased line-width to achieve effective laser-cavity coupling. A gain factor of 133 and time resolution of 10 μs were demonstrated. CO concentration-time profiles during the oxidation of highly dilute n-heptane/air mixtures were recorded and compared with chemical kinetic models. This represents the first application of a cavity-enhanced absorption diagnostic in an RCM. Finally, a calibration-free temperature diagnostic based on a pair of pulsed QCLs centered at 2196.66 cm-1 and 2046.28 cm-1 was implemented on the RCM. 
The down-chirp phenomenon resulted in large spectral tuning (Δν ≈ 2.8 cm-1) within a single pulse of each laser at a high pulse repetition frequency (100 kHz). The diagnostic was used to measure the temperature rise during first-stage ignition of n-pentane at nominal pressures of 10 and 15 bar for the first time.
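As background to the fixed-wavelength concentration diagnostics described above, the standard Beer-Lambert relation converts a measured absorbance into an absorber mole fraction given the absorption cross-section and path length. The sketch below uses hypothetical values, not the thesis's calibration data:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def mole_fraction(absorbance, sigma_cm2, path_cm, p_pa, temp_k):
    """Beer-Lambert at a fixed wavelength: A = sigma * n * L, so the
    absorber number density is n = A / (sigma * L) in cm^-3; dividing
    by the total gas density (ideal gas law) gives the mole fraction."""
    n_abs = absorbance / (sigma_cm2 * path_cm)   # absorber density, cm^-3
    n_tot = p_pa / (K_B * temp_k) * 1e-6         # total density, cm^-3
    return n_abs / n_tot
```

In practice the cross-section itself is what the shock-tube measurements calibrate as a function of temperature and pressure.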
613

Effect of additional compression features on H.264 surveillance video

Comstedt, Erik January 2017 (has links)
In the video surveillance business, a recurring topic of discussion is quality versus data usage. Higher quality allows more detail to be captured at the cost of a higher bit rate, and for cameras monitoring events 24 hours a day, limiting data usage can quickly become a factor to consider. The purpose of this thesis has been to apply additional compression features to an H.264 video stream and evaluate their effects on the video's overall quality. Using a surveillance camera, recordings of video streams were obtained. These recordings had constant GOP and frame rates. By breaking one of these videos down into an image sequence, it was possible to encode the sequence into video streams with variable GOP/FPS using the software FFmpeg. Additionally, a user test following the DSCQS method from the ITU-R recommendation was performed on these video streams, in which the participants subjectively rated the quality of the streams. The results from these tests showed that the participants did not notice any considerable difference in quality between the normal videos and the videos with variable GOP/FPS. Based on these results, the thesis has shown that additional compression features can be applied to H.264 surveillance streams without a substantial effect on the streams' overall quality.
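In a DSCQS test like the one described, each participant scores both the reference and the test stream on a continuous quality scale, and the analysis is based on the per-participant score differences. A minimal sketch of that bookkeeping, with made-up scores:

```python
def dscqs_differences(ref_scores, test_scores):
    """Per-participant difference scores (reference minus test) and
    their mean. A mean difference near zero suggests the participants
    perceived no quality loss in the test condition."""
    diffs = [r - t for r, t in zip(ref_scores, test_scores)]
    return diffs, sum(diffs) / len(diffs)
```

With 11 participants, the thesis's conclusion corresponds to a mean difference score close to zero for the variable GOP/FPS streams.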
614

Untersuchungen zur Regeneration des Nervus laryngeus recurrens nach Druckschädigung im Göttinger Miniaturschwein / Investigation of the Regeneration Potential of the Recurrent Laryngeal Nerve (RLN) after Compression Injury, Using Neuromonitoring

Hüller, Markus January 2007 (has links) (PDF)
Recurrent laryngeal nerve (RLN) palsy is one of the most frequent complications of thyroid surgery, possibly caused by the pressure of forceps or clamps. In recent years, intraoperative neuromonitoring has become established as a method for identifying the RLN and testing its function. Nevertheless, no prospective randomized studies comparing electrophysiological identification of the RLN with visual identification alone are known, so there is no proof that the use of neuromonitoring in thyroid surgery reduces the rate of RLN palsy. In neuromonitoring, the nerve is stimulated electrically with a probe, and the evoked muscle action potentials of the vocalis muscle are displayed acoustically and graphically; the potentials are recorded either via a needle electrode placed in the vocalis muscle or via surface electrodes applied to the endotracheal tube. Interpretation of the neuromonitoring signal is left to the surgeon. Slight signal changes, which may already indicate nerve damage, often go unrecognized or are misinterpreted intraoperatively, yet their correct interpretation could help the surgeon avoid a possible RLN palsy. To date there have been no studies differentiating the various signal changes seen in neuromonitoring of the RLN. Since such studies cannot be performed in humans for ethical reasons, we developed a large-animal model to assess the influence of a compression injury on RLN function in the Göttingen minipig (GMP), focusing on early detectability in the neuromonitoring signal and on late sequelae. Methods: In a first operation, the RLN and vagus nerve of adult GMP (20-35 kg, n=15) were dissected free and, after an intact neuromonitoring signal had been verified, the RLN was injured with a "bulldog" surgical clamp (2 min clamping followed by a 3 min recovery phase; three repetitions, 15 min in total, until the neuromonitoring signal disappeared completely). Neuromonitoring was performed with the Avalanche XT Thyroid system (Dr. Langer Medical); the measured parameters were amplitude, signal threshold and latency, with stimulation via both the vagus nerve and the RLN. Six months later, the site was reopened in a second operation and the measurements were repeated. Results: (1) In the first operation, acute clamping reduced the amplitude of the neuromonitoring signal, with complete restitution of the signal during the first 3 min recovery phase; repeated clamping led to a steady decrease in amplitude down to complete signal loss, for stimulation via both the RLN and the vagus nerve. Latency and threshold remained constant throughout. (2) In the second operation, i.e. after 6 months of regeneration, a neuromonitoring signal was detected in 93% of the GMP. The amplitude on RLN stimulation did not differ significantly (p=0.17) from that before the injury, whereas the amplitude on vagus stimulation was significantly smaller; latency was highly significantly longer (p<0.0005) for stimulation via both the RLN and the vagus nerve. Conclusions: An acute compression injury of the RLN is recognizable to the surgeon only as a decrease in the amplitude of the neuromonitoring signal, down to complete signal loss; latency does not indicate acute compression damage, so the injury may be overlooked during conventional neuromonitoring. A prolonged vagus latency measured at the start of a thyroid operation may indicate previous nerve damage, such as may have been caused during a primary operation, and probably reflects axonal demyelination of the RLN. A complete loss of signal at the end of an operation, however, need not indicate permanent RLN palsy: regeneration after a complete loss of signal is possible provided the continuity of the nerve is preserved, as shown by the 93% signal response of the nerve after six months. Restitutio ad integrum nevertheless did not occur, as the reduced amplitude compared with that of the undamaged nerve reveals. With continuous intraoperative neuromonitoring, an acute compression injury, which appears in the EMG solely as a reduction in amplitude, might be detected and remedied early. How other forms of nerve damage present in neuromonitoring must be shown by further studies.
615

Compression progressive et tatouage conjoint de maillages surfaciques avec attributs de couleur / Progressive compression and joint compression and watermarking of surface mesh with color attributes

Lee, Ho 21 June 2011 (has links)
The use of 3D models, represented as meshes, is constantly growing in many applications. For efficient transmission of these models and adaptation to the heterogeneity of client resources, progressive compression techniques are generally used; to protect the copyright of these models during transmission, watermarking techniques are also employed. In this thesis we first propose two progressive compression methods, for meshes with and without color information, and then present a joint system of progressive compression and watermarking. In the first part, we propose a method for optimizing the rate-distortion trade-off for meshes without color attributes. During encoding, we adapt the quantization precision to the number of elements and the geometric complexity of each level of detail. This adaptation can be carried out optimally, by measuring the distance to the original mesh, or quasi-optimally, using a theoretical model for fast optimization. The results show that our method is competitive with state-of-the-art methods. In the second part, we focus on optimizing the rate-distortion trade-off for meshes with color information attached to the vertices. After proposing two compression methods for this type of mesh, we present a rate-distortion optimization method based on adapting the quantization precision of both geometry and color for each intermediate mesh. This adaptation can be performed rapidly using a theoretical model that evaluates the number of quantization bits required for each intermediate mesh. A metric is also proposed to preserve feature elements during simplification. Finally, we propose a joint scheme of progressive compression and watermarking. To protect all levels of detail, we insert the watermark at each step of the encoding process: at each simplification iteration, we separate the mesh vertices into two sets and compute a histogram of the distribution of vertex norms for each set. We then divide these histograms into several bins and modify them by shifting bins to insert a bit. This watermarking technique is reversible: the original mesh can be restored exactly by eliminating the distortion induced by the watermark insertion. We also propose a new geometry prediction method to reduce the overhead caused by the insertion of the watermark. Experimental results show that our method is robust to various geometric attacks while maintaining a good compression ratio.
616

Architecture auto-adaptative pour le transcodage vidéo / Self-Adaptive Architecture for Video Transcoding

Guarisco, Michael 14 November 2011 (has links)
Transcoding is a key element of video transmission: it converts a video sequence from one coding format to another so that it adapts as well as possible to the transport capacity of the transmission channel. The interest of this kind of processing is to serve, from a single source of maximal quality and resolution stored for example on a server, as many users as possible, whose terminals vary widely in spatial resolution, displayable temporal resolution, and the type of channel used to access the media. Transcoding is appropriate when a video sequence must be sent to a recipient over a path made up of diverse transmission channels. We implemented a transcoder based on requantization and a transcoder based on truncation; comparing the two methods shows that, in terms of image quality, one or the other is more effective depending on the context. Our work continues with a study of SVC (Scalable Video Coding), the scalable standard derived from H.264 AVC. We studied a transcoder, operating on quality but also on spatial resolution, that rewrites an SVC stream as an AVC stream decodable by the decoders currently on the market. This conversion is achieved by a reconfigurable architecture able to adapt to the many types of stream that can conform to the SVC extension of H.264. The study led to a partial implementation of an SVC-to-AVC transcoder. This thesis describes our transcoding implementations for the AVC and then the SVC formats.
617

Effects of Repeated Wet-Dry Cycles on Compressive Strength of Fly-Ash Based Recycled Aggregate Geopolymer Concrete (RAGC)

Unknown Date (has links)
Geopolymer concrete (GC) is a sustainable construction material and a strong alternative to regular concrete. GC is a zero-cement material made from a combination of aluminate, silicate and an activator to produce a binder-like substance. This investigation focused on the effects of wet-dry cycles on the strength and durability of fly ash-based recycled aggregate geopolymer concrete (RAGC). The wet-dry cycles were performed in approximate accordance with the ASTM D559 standard. RAGC specimens with nearly 70% recycled materials (recycled aggregate and fly ash) achieved a compressive strength of approximately 3600 psi after 7 days of heat curing at 60°C. Although the recycled aggregate is prone to high water absorption, the compressive strength decreased by only 4% after exposure to 21 wet-dry cycles, compared to control specimens that were not exposed to the same conditions. Accordingly, the RAGC material developed in this study can be considered a promising environmentally friendly alternative to cement-based regular concrete. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
618

Associative neural networks: properties, learning, and applications.

January 1994 (has links)
by Chi-sing Leung.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 236-244).
Chapter 1 --- Introduction --- p.1
  1.1 --- Background of Associative Neural Networks --- p.1
  1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3
  1.3 --- A Direct Encoding Model: Kohonen Map --- p.6
  1.4 --- Scope and Organization --- p.9
  1.5 --- Summary of Publications --- p.13
Part I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17
Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18
  2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18
  2.2 --- Recall Process of BAM --- p.20
  2.3 --- Stability of BAM --- p.22
  2.4 --- Memory Capacity of BAM --- p.24
  2.5 --- Error Correction Capability of BAM --- p.28
  2.6 --- Chapter Summary --- p.29
Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31
  3.1 --- Introduction --- p.31
  3.2 --- Existence of Energy Barrier --- p.34
  3.3 --- Memory Capacity from Energy Barrier --- p.44
  3.4 --- Confidence Dynamics --- p.49
  3.5 --- Numerical Results from the Dynamics --- p.63
  3.6 --- Chapter Summary --- p.68
Chapter 4 --- Stability and Statistical Dynamics of Second Order BAM --- p.70
  4.1 --- Introduction --- p.70
  4.2 --- Second Order BAM and its Stability --- p.71
  4.3 --- Confidence Dynamics of Second Order BAM --- p.75
  4.4 --- Numerical Results --- p.82
  4.5 --- Extension to Higher Order BAM --- p.90
  4.6 --- Verification of the Conditions of Newman's Lemma --- p.94
  4.7 --- Chapter Summary --- p.95
Chapter 5 --- Enhancement of BAM --- p.97
  5.1 --- Background --- p.97
  5.2 --- Review on Modifications of BAM --- p.101
    5.2.1 --- Change of the encoding method --- p.101
    5.2.2 --- Change of the topology --- p.105
  5.3 --- Householder Encoding Algorithm --- p.107
    5.3.1 --- Construction from Householder Transforms --- p.107
    5.3.2 --- Construction from Iterative Method --- p.109
    5.3.3 --- Remarks on HCA --- p.111
  5.4 --- Enhanced Householder Encoding Algorithm --- p.112
    5.4.1 --- Construction of EHCA --- p.112
    5.4.2 --- Remarks on EHCA --- p.114
  5.5 --- Bidirectional Learning --- p.115
    5.5.1 --- Construction of BL --- p.115
    5.5.2 --- The Convergence of BL and the Memory Capacity of BL --- p.116
    5.5.3 --- Remarks on BL --- p.120
  5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121
    5.6.1 --- Construction of AHKBL --- p.121
    5.6.2 --- Convergent Conditions for AHKBL --- p.124
    5.6.3 --- Remarks on AHKBL --- p.125
  5.7 --- Computer Simulations --- p.126
    5.7.1 --- Memory Capacity --- p.126
    5.7.2 --- Error Correction Capability --- p.130
    5.7.3 --- Learning Speed --- p.157
  5.8 --- Chapter Summary --- p.158
Chapter 6 --- BAM under Forgetting Learning --- p.160
  6.1 --- Introduction --- p.160
  6.2 --- Properties of Forgetting Learning --- p.162
  6.3 --- Computer Simulations --- p.168
  6.4 --- Chapter Summary --- p.168
Part II --- Kohonen Map: Applications in Data Compression and Communications --- p.170
Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171
  7.1 --- Background on Vector Quantization --- p.171
  7.2 --- Introduction to LBG Algorithm --- p.173
  7.3 --- Introduction to Kohonen Map --- p.174
  7.4 --- Chapter Summary --- p.179
Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181
  8.1 --- Use Kohonen Map to Design Trellis Coded Vector Quantizer --- p.182
    8.1.1 --- Trellis Coded Vector Quantizer --- p.182
    8.1.2 --- Trellis Coded Kohonen Map --- p.188
    8.1.3 --- Computer Simulations --- p.191
  8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195
    8.2.1 --- Impulsive Noise in the Received Data --- p.195
    8.2.2 --- Combined Kohonen Map and Modulation --- p.198
    8.2.3 --- Computer Simulations --- p.200
  8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213
    8.3.1 --- Motivation and Background --- p.214
    8.3.2 --- Trellis Coded Modulation --- p.216
    8.3.3 --- Combined Vector Quantization, Error Control, and Modulation --- p.220
    8.3.4 --- Computer Simulations --- p.223
  8.4 --- Chapter Summary --- p.226
Chapter 9 --- Conclusion --- p.232
Bibliography --- p.236
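The BAM encoding and recall procedure covered in Chapters 2-3 of this thesis follows the classic outer-product construction: pattern pairs are stored as a correlation matrix, and recall alternates thresholded multiplications until the pair stabilizes. A minimal sketch for bipolar patterns:

```python
import numpy as np

def bam_encode(xs, ys):
    """Outer-product encoding: W = sum_k y_k x_k^T for bipolar
    (+1/-1) pattern pairs (x_k, y_k)."""
    return sum(np.outer(y, x) for x, y in zip(xs, ys))

def bam_recall(W, x, iters=5):
    """Bidirectional recall: alternate y = sgn(W x) and
    x = sgn(W^T y) until the pair settles (iters passes here)."""
    sgn = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(iters):
        y = sgn(W @ x)
        x = sgn(W.T @ y)
    return x, y
```

Presenting a stored (or mildly corrupted) x pattern recovers its associated y pattern, which is the error-correction behavior whose capacity limits the thesis analyzes.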
619

Progressive transmission of digital recurrent video.

January 1992 (has links)
by Wai-Wa Wilson Chan.
Thesis (M.Sc.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references (leaves 79-80).
Chapter 1 --- Introduction --- p.1
  1.1 --- Problem under study and scope --- p.4
  1.2 --- Review of relevant research --- p.6
  1.3 --- Objectives --- p.11
Chapter 2 --- Theory --- p.12
  2.1 --- Multi-resolution representation of digital video --- p.13
  2.2 --- Performance measure of progressive algorithm --- p.15
  2.3 --- Introduction to depth pyramid --- p.35
  2.4 --- Introduction to spatial pyramid --- p.37
  2.5 --- Introduction to temporal pyramid --- p.42
  2.6 --- Proposed algorithm for progressive transmission using depth-spatial-temporal pyramid --- p.46
Chapter 3 --- Experiment --- p.55
  3.1 --- Simulation on depth pyramid --- p.59
  3.2 --- Simulation on spatial pyramid --- p.60
  3.3 --- Simulation on temporal pyramid --- p.62
  3.4 --- Simulation on algorithm for progressive transmission using depth-spatial-temporal pyramid --- p.64
Chapter 4 --- Conclusions and discussions --- p.74
Chapter 5 --- Reference and Appendix --- p.79
620

Model- and image-based scene representation.

January 1999 (has links)
Lee Kam Sum.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999.
Includes bibliographical references (leaves 97-101).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.2
  1.1 --- Video representation using panorama mosaic and 3D face model --- p.2
  1.2 --- Mosaic-based Video Representation --- p.3
  1.3 --- 3D Human Face Modeling --- p.7
Chapter 2 --- Background --- p.13
  2.1 --- Video Representation using Mosaic Image --- p.13
    2.1.1 --- Traditional Video Compression --- p.17
  2.2 --- 3D Face Model Reconstruction via Multiple Views --- p.19
    2.2.1 --- Shape from Silhouettes --- p.19
    2.2.2 --- Head and Face Model Reconstruction --- p.22
    2.2.3 --- Reconstruction using Generic Model --- p.24
Chapter 3 --- System Overview --- p.27
  3.1 --- Panoramic Video Coding Process --- p.27
  3.2 --- 3D Face Model Reconstruction Process --- p.28
Chapter 4 --- Panoramic Video Representation --- p.32
  4.1 --- Mosaic Construction --- p.32
    4.1.1 --- Cylindrical Panorama Mosaic --- p.32
    4.1.2 --- Cylindrical Projection of Mosaic Image --- p.34
  4.2 --- Foreground Segmentation and Registration --- p.37
    4.2.1 --- Segmentation Using Panorama Mosaic --- p.37
    4.2.2 --- Determination of Background by Local Processing --- p.38
    4.2.3 --- Segmentation from Frame-Mosaic Comparison --- p.40
  4.3 --- Compression of the Foreground Regions --- p.44
    4.3.1 --- MPEG-1 Compression --- p.44
    4.3.2 --- MPEG Coding Method: I/P/B Frames --- p.45
  4.4 --- Video Stream Reconstruction --- p.48
Chapter 5 --- Three Dimensional Human Face Modeling --- p.52
  5.1 --- Capturing Images for 3D Face Modeling --- p.53
  5.2 --- Shape Estimation and Model Deformation --- p.55
    5.2.1 --- Head Shape Estimation and Model Deformation --- p.55
    5.2.2 --- Face Organs Shaping and Positioning --- p.58
    5.2.3 --- Reconstruction with both Intrinsic and Extrinsic Parameters --- p.59
    5.2.4 --- Reconstruction with only Intrinsic Parameters --- p.63
    5.2.5 --- Essential Matrix --- p.65
    5.2.6 --- Estimation of Essential Matrix --- p.66
    5.2.7 --- Recovery of 3D Coordinates from Essential Matrix --- p.67
  5.3 --- Integration of Head Shape and Face Organs --- p.70
  5.4 --- Texture-Mapping --- p.71
Chapter 6 --- Experimental Result & Discussion --- p.74
  6.1 --- Panoramic Video Representation --- p.74
    6.1.1 --- Compression Improvement from Foreground Extraction --- p.76
    6.1.2 --- Video Compression Performance --- p.78
    6.1.3 --- Quality of Reconstructed Video Sequence --- p.80
  6.2 --- 3D Face Model Reconstruction --- p.91
Chapter 7 --- Conclusion and Future Direction --- p.94
Bibliography --- p.101
