Motion compensation-scalable video coding. Αθανασόπουλος, Διονύσιος, 17 September 2007.
In this master thesis we examine scalable video coding based on the wavelet transform. Scalable video coding refers to a compression framework in which content representations with different quality, resolution, and frame rate can be extracted from parts of one compressed bitstream. Such scalability is an important property nowadays, when video streaming and video communication take place over unreliable transmission media and between terminals with different capabilities. Scalable video coding based on motion-compensated spatiotemporal wavelet decompositions is becoming increasingly popular, as it provides coding performance competitive with state-of-the-art coders while accommodating varying network bandwidths and different receiver capabilities (frame rate, display size, CPU, etc.), and it provides solutions for network congestion and video server design.
In this master thesis we investigate the wavelet transform, multiresolution analysis, and the lifting scheme, whose introduction sparked renewed interest in scalable video coding. We then focus on scalable video coding/decoding. There are two different architectures for scalable video coding: the first performs the wavelet transform in the temporal direction first and then applies the spatial wavelet decomposition, while the other applies the spatial wavelet transform first and then the temporal decomposition. We focus on the first architecture, also known as the class of t+2D scalable coding systems.
Several coding parameters affect the performance of a scalable video coding scheme, such as the number of temporal decomposition levels and the interpolation filter used for subpixel accuracy. We have conducted extensive experiments to test the influence of these parameters, which proves to depend on the video content. Thus, we present an adaptive way of choosing the values of these parameters based on the video content. Experimental results show that the proposed method not only significantly improves performance but also reduces the complexity of the coding procedure.
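The temporal wavelet stage at the core of such t+2D systems can be illustrated with a minimal lifting sketch. This is plain Haar lifting on a 1-D sequence, not the thesis codec: motion compensation is omitted, and each "frame" is reduced to a single sample for clarity.

```python
# Minimal sketch (not the thesis codec): one level of the lifting-based
# Haar wavelet transform along the temporal axis, the building block of
# t+2D scalable coding. Motion compensation is omitted; each "frame"
# is reduced to a single sample for clarity.

def haar_lifting_forward(frames):
    """Split frames into temporal low-pass and high-pass bands via lifting."""
    even, odd = frames[0::2], frames[1::2]
    high = [o - e for o, e in zip(odd, even)]       # predict step
    low = [e + h / 2 for e, h in zip(even, high)]   # update step
    return low, high

def haar_lifting_inverse(low, high):
    """Undo the update and predict steps; lifting is exactly invertible."""
    even = [l - h / 2 for l, h in zip(low, high)]
    odd = [h + e for h, e in zip(high, even)]
    frames = []
    for e, o in zip(even, odd):
        frames.extend([e, o])
    return frames

frames = [10.0, 12.0, 11.0, 15.0]
low, high = haar_lifting_forward(frames)
assert haar_lifting_inverse(low, high) == frames    # perfect reconstruction
```

The exact invertibility of the lifting steps is what makes the scheme attractive for scalable coding: dropping high-pass bands degrades frame rate gracefully without breaking reconstruction of the retained bands.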
ADVANCEMENTS IN TRANSMISSION LINE FAULT LOCATION. Kang, Ning, 01 January 2010.
In modern power transmission systems, the double-circuit line structure is increasingly adopted. However, due to the mutual coupling between the parallel lines it is quite challenging to design accurate fault location algorithms. Moreover, the widely used series compensator and its protective device introduce harmonics and non-linearities to the transmission lines, which make fault location more difficult. To tackle these problems, this dissertation is committed to developing advanced fault location methods for double-circuit and series-compensated transmission lines.
Algorithms that utilize sparse measurements to pinpoint the location of short-circuit faults on double-circuit lines are proposed. By decomposing the original network into three sequence networks, the bus impedance matrix for each network, with the addition of the fictitious fault bus, can be formulated as a function of the unknown fault location. With the augmented bus impedance matrices, the sequence voltage change during the fault at any bus can be expressed in terms of the corresponding sequence fault current and the transfer impedance between the fault bus and the measured bus. Resorting to VCR, the superimposed sequence current at any branch can be expressed with respect to the pertaining sequence fault current and transfer impedance terms. Obeying the boundary conditions of the different fault types, four classes of fault location algorithms are derived, utilizing either voltage phasors, phase voltage magnitudes, current phasors, or phase current magnitudes. The distinguishing characteristic of the proposed method is that the measurements need not stem from the faulted section itself. Quite satisfactory results have been obtained in EMTP simulation studies.
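A toy illustration of the transfer-impedance idea follows. This is a drastically simplified single-phase, single-circuit network, not the dissertation's sequence-network algorithm, and every numerical value is invented.

```python
import numpy as np

# Toy single-phase illustration of the transfer-impedance idea (not the
# dissertation's sequence-network algorithm; every number is invented).
# A line of impedance z joins two sources with impedances Za and Zb.
# A fault at per-unit distance x defines a fictitious fault bus F; the
# transfer impedance Z1F(x) maps the fault current to the voltage change
# at bus 1, so scanning x to match the measured change locates the fault.

z, Za, Zb = 0.8j, 0.1j, 0.15j   # p.u. impedances (hypothetical)

def Z1F(x):
    # transfer impedance between bus 1 and a fault bus at distance x:
    # current divider of the injected fault current toward bus 1
    return Za * ((1 - x) * z + Zb) / (z + Za + Zb)

def dV1(x, If):
    # sequence voltage change at bus 1 during the fault
    return -Z1F(x) * If

If = 3.0 - 1.5j                          # assumed fault current
measured = dV1(0.37, If)                 # "measurement" from a fault at x = 0.37
xs = np.linspace(0.0, 1.0, 1001)
errors = np.abs(np.array([dV1(x, If) for x in xs]) - measured)
x_hat = xs[np.argmin(errors)]
assert abs(x_hat - 0.37) < 1e-3          # fault located on the scan grid
```

The dissertation's algorithms work analogously but on the three sequence networks of the full double-circuit system, with the fault current itself also treated as an unknown.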
A fault location algorithm for series-compensated transmission lines that employs two-terminal unsynchronized voltage and current measurements has been implemented. Two subroutines are developed for the distinct cases in which the fault occurs on the left or on the right side of the series compensator. In addition, the procedure to identify the correct fault location estimate is described in this work. Simulation studies carried out with Matlab SimPowerSystems show that the fault location results are very accurate.
Stability and Convergence of High Order Numerical Methods for Nonlinear Hyperbolic Conservation Laws. Mehmetoglu, Orhan, August 2012.
Recently there have been numerous advances in the development of numerical algorithms for solving conservation laws. Even though the analytical theory (existence-uniqueness) is complete in the case of scalar conservation laws, there are many numerically robust methods for which the questions of convergence and error estimates are still open. High order schemes are usually constructed to be Total Variation Diminishing (TVD), which only guarantees convergence of such schemes to a weak solution. The standard approach to proving convergence to the entropy solution is to try to establish cell entropy inequalities. However, this typically requires additional non-homogeneous limitations on the numerical method, which reduce the modified scheme to first order when the mesh is refined. There are only a few convergence results that do not impose such limitations, and all of them assume some smoothness of the initial data in addition to an L^infinity bound.
The Nessyahu-Tadmor (NT) scheme is a typical example of a high order scheme. It is a simple yet robust second order non-oscillatory scheme, which relies on a non-linear piecewise linear reconstruction. A standard reconstruction choice is based on the so-called minmod limiter which gives a maximum principle for the scheme. Unfortunately, this limiter reduces the reconstruction to first order at local extrema. Numerical evidence suggests that this limitation is not necessary. By using MAPR-like limiters, one can allow local nonlinear reconstructions which do not reduce to first order at local extrema. However, use of such limiters requires a new approach when trying to prove a maximum principle for the scheme. It is also well known that the NT scheme does not satisfy the so-called strict cell entropy inequalities, which is the main difficulty in proving convergence to the entropy solution.
In this work, the NT scheme with MAPR-like limiters is considered. A maximum principle is proven for a conservation law with any Lipschitz flux, and also with any k-monotone flux. Using this result, it is also proven that, in the case of a strictly convex flux, the NT scheme with a properly selected MAPR-like limiter satisfies a one-sided Lipschitz stability estimate. As a result, convergence to the unique entropy solution is obtained when the initial data satisfies the so-called one-sided Lipschitz condition. Finally, compensated compactness arguments are employed to prove that, for any bounded initial data, the NT scheme based on a MAPR-like limiter converges strongly on compact sets to the unique entropy solution of the conservation law with a strictly convex flux.
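For concreteness, here is a minimal sketch of the classical NT scheme with the standard minmod limiter (the thesis studies MAPR-like limiters instead), applied to Burgers' flux f(u) = u^2/2 on a periodic grid; the grid size and time step are illustrative.

```python
import numpy as np

# Minimal sketch of the classical Nessyahu-Tadmor (NT) scheme with the
# standard minmod limiter (the thesis studies MAPR-like limiters instead),
# applied to Burgers' flux f(u) = u^2/2 on a periodic grid. One staggered
# predictor-corrector step is taken; parameters are illustrative.

def minmod(a, b):
    # zero at sign changes (local extrema), otherwise the smaller slope
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def f(u):
    return 0.5 * u * u

def nt_step(u, dx, dt):
    lam = dt / dx
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))               # limited slope of u
    df = minmod(np.roll(f(u), -1) - f(u), f(u) - np.roll(f(u), 1))   # limited slope of f
    u_mid = u - 0.5 * lam * df                                       # predictor at t + dt/2
    # corrector on the staggered cell [x_j, x_{j+1}]
    return (0.5 * (u + np.roll(u, -1)) + 0.125 * (du - np.roll(du, -1))
            - lam * (f(np.roll(u_mid, -1)) - f(u_mid)))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u1 = nt_step(u0, 1.0 / 200.0, 0.001)      # CFL number 0.2, well inside stability
assert np.max(np.abs(u1)) <= np.max(np.abs(u0)) + 1e-12   # maximum principle
```

Note how minmod returns zero whenever the two one-sided slopes disagree in sign, i.e. at local extrema; this is exactly the first-order clipping that MAPR-like limiters are designed to avoid.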
Detecting Malingering in Compensated Low Back Pain Patients: An Analog Study. Grewe, Jennifer R., 01 May 2010.
Given the prevalence and cost of low back pain, particularly among workers' compensation patients, it is advantageous to understand how various psychological constructs may be related to prolonged disability and failure to return to work. Malingering is a psychological construct that is clearly relevant for workers' compensation populations and is well suited to experimental control within an analog study. Malingering is the intentional exaggeration of physical or psychological symptoms, motivated by external incentives such as time away from work. The ability to detect malingering in such a population with psychological assessments is unclear. An analog study was conducted in which we instructed college students to portray themselves as injured workers who had received a back injury that required them to be off work while they recovered. Students were then told that they would be seeing a psychologist who would attempt to ascertain their ability to return to work via the MMPI-2. Students were then randomly assigned to respond to the MMPI-2 in one of three ways: a control condition was instructed to respond as if they had suffered a workplace back injury that resulted in significant pain; a subtle fake-bad condition received the control instruction and was additionally informed that they did not enjoy their work and that their back injury allowed them to enjoy personal and family time more; and a fake-bad condition received the control instruction and was asked to deliberately portray themselves as experiencing physical symptoms severe enough to keep them off work longer. Currently, no assessment of malingering exists for a compensated low back pain population. The purpose of this study was to determine whether the MMPI-2 can be used to differentially identify "patients" instructed to report symptoms veridically versus "patients" instructed to consciously feign and magnify symptoms in an effort to avoid returning to work.
Malingering and non-malingering patients' scores on the MMPI-2 validity and clinical scales were subjected to a cluster analysis to determine whether a malingering profile could be accurately identified. A 5-cluster validity solution and a 4-cluster clinical solution (both with K correction) were accepted. In the 5-cluster validity solution, substantially lower scores on L and K and elevated scores on F distinguished the "malingering" profile. In the 4-cluster clinical solution, elevated scores on the clinical scales of hypochondriasis, depression, paranoia, and schizophrenia distinguished the "malingering" profile. The results indicate that the MMPI-2 could be useful in detecting malingering in compensated back pain patients. Results are discussed in the context of pain studies.
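The flavor of such a profile cluster analysis can be sketched with invented T-scores and a plain k-means implementation; the study's actual clustering procedure and data are not reproduced here. One synthetic group is given the reported "malingering" validity pattern: low L and K with elevated F.

```python
import numpy as np

# Toy sketch of profile cluster analysis with invented T-scores and a
# plain k-means (the study's actual data and clustering procedure are
# not reproduced). One synthetic group is given the reported
# "malingering" validity pattern: low L and K with elevated F.

rng = np.random.default_rng(0)
controls = rng.normal([55.0, 50.0, 55.0], 4.0, size=(40, 3))     # columns: L, F, K
malingerers = rng.normal([42.0, 80.0, 40.0], 4.0, size=(20, 3))  # low L/K, high F
X = np.vstack([controls, malingerers])

def kmeans(X, iters=50):
    centers = X[[0, len(X) - 1]].copy()      # deterministic 2-center init
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels, centers

labels, centers = kmeans(X)
mal = int(np.argmax(centers[:, 1]))          # cluster with the higher mean F
assert centers[mal, 1] > centers[1 - mal, 1] + 20.0   # F clearly elevated
assert centers[mal, 0] < centers[1 - mal, 0]          # L lower
assert centers[mal, 2] < centers[1 - mal, 2]          # K lower
```

With well-separated profiles the recovered cluster centers mirror the planted pattern; in real data the clusters are far less clean, which is why the study's interpretation rests on the relative elevation of the validity scales rather than hard thresholds.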
Manipulation of Light with Transformation Optics. Yan, Wei, January 2010.
Transformation optics, a recently booming area, provides a new approach to designing optical devices for manipulating light. With transformation optics, many novel optical devices have been proposed, such as invisibility cloaks, optical wormholes, optical black holes, and illusion devices. The present thesis is devoted to investigating transformation optics for manipulating light. First, an introduction to transformation optics is given. This part includes: (1) introducing differential geometry as the mathematical preparation; (2) expressing Maxwell's equations in an arbitrary coordinate system and introducing the concept of transformation media as the foundation stone of transformation optics; (3) discussing light from the geometric perspective as the essence of transformation optics; and (4) showing how to use transformation optics to design optical devices. In our work on invisibility cloaks, we analyze the properties of arbitrarily shaped invisibility cloaks and confirm their invisibility abilities. The geometrical perturbations of cylindrical and spherical cloaks are analyzed in detail. We show that a cylindrical cloak is more sensitive to perturbation than a spherical cloak. By imposing a PEC (PMC) layer at the interior boundary of the cylindrical cloak shell for TM (TE) waves, the sensitivity can be reduced dramatically. A simplified non-magnetic cylindrical cloak is also designed. We show that the dominant zeroth order scattering term can be eliminated by employing an air gap between the cloak and the cloaked region. We propose a compensated bilayer based on a folding coordinate transformation. It is pointed out that complementary media, the perfect negative index lens, and the perfect bilayer lens made of indefinite media are well unified under the scope of the transformed compensated bilayer. We demonstrate applications of the compensated bilayer, such as perfect imaging and optical illusion.
Arbitrarily shaped compensated bilayers are also analyzed. Nihility media, known as media with ε = μ = 0, are generalized from transformation optics as transformation media derived from volumeless geometrical elements. The practical construction of nihility media with metamaterials is discussed. The eigenfields in nihility media are derived, and the interaction between an external incident wave and a slab of nihility media in a free-space background is analyzed. A new type of transformation media, called α media, is proposed for manipulating light. Light rays in an α medium have a simple displacement or rotation relationship with those in another medium (the seed medium); this relationship is named the α relationship. The α media can be designed and simplified to a certain class of diagonal anisotropic media, which are related to certain isotropic media by the α relationship. Several optical devices based on α transformation media are designed. Invisibility cloaks obtained from the coordinate transformation approach are revisited from a different perspective.
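The transformation-media rule underlying these designs can be stated compactly. This is the standard result of transformation optics, not a derivation specific to this thesis; Λ denotes the Jacobian of the coordinate transformation.

```latex
\varepsilon' \;=\; \frac{\Lambda\,\varepsilon\,\Lambda^{\mathsf{T}}}{\det\Lambda},
\qquad
\mu' \;=\; \frac{\Lambda\,\mu\,\Lambda^{\mathsf{T}}}{\det\Lambda},
\qquad
\Lambda^{i}{}_{i'} \;=\; \frac{\partial x^{i}}{\partial x^{i'}} .
```

For the folding map x' = -x used for the compensated bilayer, Λ = diag(-1, 1, 1) and det Λ = -1, so an isotropic seed medium transforms as ε' = -ε and μ' = -μ: the folded layer is the complementary, negative-index counterpart of the seed layer, which is why complementary media and the perfect negative-index lens fall under the same scope.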
HIGH FREQUENCY (1000 HZ) TYMPANOMETRY AND ACOUSTIC REFLEX FINDINGS IN NEWBORN AND 6-WEEK-OLD INFANTS. Rafidah Mazlan, Unknown Date.
Tympanometry and acoustic stapedial reflex (ASR) testing are routinely used in audiology clinics to assess the functional integrity of the eardrum and middle ear system in humans. Conventional tympanometry (which delivers a probe tone of 226 Hz into the ear canal and measures the mobility of the eardrum as the air pressure in the ear canal is varied) and acoustic reflex testing are effective in detecting middle ear pathologies in children and adults. However, the clinical application of these two tests to infants younger than 7 months has major limitations. In recent years, high frequency tympanometry (HFT) with a probe tone of 1000 Hz has been trialled successfully in young infants (< 7 months), and research on ASRs as they apply to this age group is continuing. Although preliminary HFT data for this population are emerging, there has been no detailed study that describes the effect of age on HFT and ASR results, no clear guideline on how to interpret HFT results, and no investigation of the feasibility and reliability of ASR findings. For these reasons, a systematic investigation into the use of HFT and ASR measures for evaluating the middle ear function of young infants is warranted. This thesis aimed to: (i) investigate the feasibility of obtaining HFT and ASR findings from newborn and 6-week-old infants, and study the characteristics of the immittance findings in these two age groups; (ii) investigate methods within HFT to measure the middle ear admittance of newborn babies; (iii) establish normative HFT data from healthy newborn babies using the new component compensation method; (iv) examine the test-retest reliability of the ASR test in healthy neonates; and (v) investigate the test-retest reliability of the ASR test in 6-week-old infants. The aims of the thesis were met through five studies.
In study 1 (Chapter 2), a pilot study was conducted to examine the feasibility of performing HFT and ASR testing in 42 healthy infants and to study the characteristics of the immittance findings obtained from these infants using a longitudinal study design. In this pilot study, all infants were tested at birth and then re-tested approximately 6 weeks after the first test. This study confirmed the feasibility of obtaining valid immittance findings from healthy young infants. Most importantly, the findings of this pilot study revealed that the mean values of the majority of HFT parameters and the acoustic stapedial reflex threshold (ASRT) obtained at 6 weeks were significantly greater than those obtained at birth, indicating the need for separate sets of normative data for both tests for newborn and 6-week-old infants. In study 2 (Chapter 3), three different methods of measuring middle ear admittance (often described as peak compensated static admittance) were compared in 36 healthy neonates. The three methods were the traditional baseline compensation method (compensated for the susceptance component at 200 daPa) and two new component compensation methods (compensated for both the susceptance and conductance components, at 200 daPa and at -400 daPa). The results showed that the mean middle ear admittances obtained by compensating for the two components of admittance at a pressure of 200 daPa (YCC200) and -400 daPa (YCC-400) were significantly greater than that obtained with the traditional baseline compensation method (YBC). The higher mean admittance obtained with the new component compensation methods suggests that these methods have the potential to better separate normal from abnormal admittance results. The test-retest reliability of YBC, YCC200, and YCC-400 was also investigated, with the result that a lower test-retest reliability was obtained for YCC-400 than for the other two measures.
It was, therefore, concluded that the component compensation method compensated at 200 daPa may serve as an alternative method for estimating middle ear admittance, especially in the context of assessing neonates using HFT. In study 3 (Chapter 4), normative data were gathered using the new component compensation method (compensated at 200 daPa) on a group of 157 healthy newborn babies. In addition to the component compensated static admittance (YCC), normative data showing the 90% ranges for tympanometric peak pressure, admittance at 200 daPa, uncompensated peak admittance, and traditional baseline compensated static admittance (YBC) were established in this study. No gender effect was found on any of the tympanometric measures. In study 4 (Chapter 5), the use of the ASR to evaluate middle ear function in neonates was studied. The feasibility of obtaining ipsilateral ASRs from neonates by stimulating their ears with a 2 kHz tone and broadband noise (BBN) was demonstrated. ASRs were elicited from 91.3% of 219 full-term normal neonates, while the remaining 8.7% of neonates, who had flat tympanograms and no transient evoked otoacoustic emissions, did not exhibit ASRs. Good test-retest reliability was demonstrated for the ASRT obtained using both the 2 kHz tone and the BBN stimulus; there was no significant difference between test and retest conditions, and intraclass correlation coefficients were 0.83 for the 2 kHz tone and 0.76 for the BBN stimulus. In study 5 (Chapter 6), the test-retest reliability of the ASRT obtained from 70 6-week-old infants was investigated, following the methodology described in Chapter 5. No significant difference in ASRT between test and retest conditions was found for either the 2 kHz tone (mean ASRT = 67.3 dB HL versus 67.1 dB HL) or the BBN stimulus (mean ASRT = 80.9 dB HL versus 81.6 dB HL). Good test-retest reliability of the ASRT, with an intraclass correlation coefficient of 0.78, was found for both the 2 kHz tone and the BBN stimulus.
In essence, through achieving the aforementioned aims, the current research program was able to enhance the minimal literature available concerning the use of HFT and ASR testing in young infants. Ultimately, the findings presented in this thesis will inform clinicians of the recent developments in HFT and ASR testing, and assist them in evaluating the middle ear function of young infants with accuracy and confidence.
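The two compensation conventions compared in study 2 can be illustrated numerically. The admittance values below are invented, and treating baseline compensation as a subtraction of admittance magnitudes is one common convention, not necessarily the thesis's exact procedure.

```python
# Illustrative comparison of the two compensation methods from study 2,
# using invented admittance values in mmho (not data from the thesis).
# Admittance is complex: Y = G + jB (conductance plus j * susceptance).

Y_peak = complex(1.9, 2.4)   # admittance at tympanometric peak pressure
Y_tail = complex(1.2, 0.9)   # admittance at the +200 daPa tail

# Baseline compensation (YBC), one common convention: subtract the
# admittance magnitude measured at the tail pressure.
YBC = abs(Y_peak) - abs(Y_tail)

# Component compensation at 200 daPa (YCC200): subtract susceptance and
# conductance separately, then take the magnitude of the difference.
YCC200 = abs(Y_peak - Y_tail)

# By the triangle inequality YCC200 >= YBC always, consistent with the
# finding that component compensation yields larger mean admittance.
assert YCC200 >= YBC > 0.0
```

The inequality in the last comment is purely mathematical: |Y_peak - Y_tail| can never be smaller than |Y_peak| - |Y_tail|, which is one way to see why the component-compensated values reported in the thesis come out systematically higher.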
Band Gap - accurate voltage reference. Bubla, Jiří, January 2009.
This diploma thesis focuses on the design of a high accuracy Bandgap voltage reference. A very low temperature coefficient and an output voltage of approximately 1.205 V are the main features of this circuit. The paper contains a derivation of the Bandgap principle, examples of circuit realizations, methods of compensating for temperature dependence and manufacturing process variation, the design of Brokaw and Gilbert references, the design of a test chip, and measurement results.
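The first-order Bandgap principle can be sketched numerically: a CTAT base-emitter voltage is summed with a scaled PTAT term so that the linear temperature coefficients cancel. The device numbers below are illustrative, not taken from the design.

```python
import math

# First-order sketch of the Bandgap principle described above: a CTAT
# base-emitter voltage is summed with a scaled PTAT term M * VT * ln(N)
# so that the linear temperature coefficients cancel, leaving a reference
# of roughly 1.2 V. Device numbers are illustrative, not from the design.

k = 1.380649e-23          # Boltzmann constant, J/K
q = 1.602176634e-19       # elementary charge, C
T0 = 300.0                # reference temperature, K
VBE0, dVBE_dT = 0.65, -2.0e-3   # V and V/K, a typical CTAT slope
N = 8                     # emitter-area ratio of the BJT pair (assumed)

VT0 = k * T0 / q
M = -dVBE_dT * T0 / (VT0 * math.log(N))   # PTAT gain that cancels the slope

def vref(T):
    VBE = VBE0 + dVBE_dT * (T - T0)       # linearized CTAT term
    return VBE + M * (k * T / q) * math.log(N)

assert 1.1 < vref(T0) < 1.3               # near the 1.2 V bandgap voltage
assert abs(vref(T0 + 50.0) - vref(T0 - 50.0)) < 1e-9   # flat to first order
```

In this linearized model the cancellation is exact; in a real circuit the curvature of VBE(T) remains, which is what the higher-order compensation techniques covered in the thesis address.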
On the use of compensated machining paths to alleviate three-lobed deformations. Ilie, Andreea, January 2020.
During turning, the neck of a workpiece undergoes a three-lobed deformation due to the clamping pressure of the gripping jaws. While the workpiece is deformed, the machining tool cuts along a preprogrammed circular path. After removal from the chuck, the material returns to its original shape, thus deforming the machined circular path: the processed part is no longer circular, as it should be. This type of problem is usually solved by changing the fixtures (jaws) or adjusting the clamping pressure. This thesis takes a different approach, based on creating a compensated toolpath that follows the workpiece deformation. This can be a much faster and cheaper way to solve the problem, and the technique can be applied to other cylindrical workpieces. The main results of this thesis are a methodology to address the deformation problem as well as suggested changes to the manufacturing process for the workpiece, in particular a change from turning to milling in the final fine-machining stage.
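The compensation idea can be sketched with a simple springback model. The radius, lobe amplitude, and cosine deformation shape below are invented for illustration, not measured values from the thesis.

```python
import math

# Sketch of the compensated-toolpath idea with invented numbers: the
# three-jaw chuck deforms the neck elastically, so a circle cut while
# clamped becomes three-lobed on release. Cutting a path offset by the
# negative of the elastic recovery cancels the lobing.

r0 = 25.0     # nominal radius, mm (hypothetical)
a = 0.012     # lobe amplitude of the elastic recovery, mm (hypothetical)

def elastic_recovery(theta):
    # three-lobed springback pattern matching the three jaws
    return a * math.cos(3.0 * theta)

def compensated_toolpath(theta):
    # radius programmed while the part is clamped
    return r0 - elastic_recovery(theta)

def released_radius(theta):
    # after unclamping, the elastic recovery is added back
    return compensated_toolpath(theta) + elastic_recovery(theta)

# the released part is round to within floating-point error
assert all(abs(released_radius(0.01 * i) - r0) < 1e-9 for i in range(629))
```

In practice the deformation shape must first be measured or simulated; the cosine term here stands in for that measured lobing profile.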
Fully Scalable Video Coding Using Redundant-Wavelet Multihypothesis and Motion-Compensated Temporal Filtering. Wang, Yonghui, 13 December 2003.
In this dissertation, a fully scalable video coding system is proposed. This system achieves full temporal, resolution, and fidelity scalability by combining mesh-based motion-compensated temporal filtering, multihypothesis motion compensation, and an embedded 3D wavelet-coefficient coder. The first major contribution of this work is the introduction of the redundant-wavelet multihypothesis paradigm into motion-compensated temporal filtering, which is achieved by deploying temporal filtering in the domain of a spatially redundant wavelet transform. A regular triangle mesh is used to track motion between frames, and an affine transform between mesh triangles implements motion compensation within a lifting-based temporal transform. Experimental results reveal that the incorporation of redundant-wavelet multihypothesis into mesh-based motion-compensated temporal filtering significantly improves the rate-distortion performance of the scalable coder. The second major contribution is the introduction of a sliding-window implementation of motion-compensated temporal filtering, such that video sequences of arbitrary length may be temporally filtered using a finite-length frame buffer without suffering severe degradation at buffer boundaries. Finally, as a third major contribution, a novel 3D coder is designed for the coding of the 3D volume of coefficients resulting from the redundant-wavelet based temporal filtering. This coder employs an explicit estimate of the probability of coefficient significance to drive a nonadaptive arithmetic coder, resulting in a simple software implementation. Additionally, the coder offers the possibility of a high degree of vectorization, particularly well suited to the data-parallel capabilities of modern general-purpose processors or customized hardware. Results show that the proposed coder yields nearly the same rate-distortion performance as a more complicated coefficient coder considered to be state of the art.
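The idea behind the third contribution (driving a nonadaptive arithmetic coder from an explicit estimate of coefficient significance) can be sketched by computing the ideal fixed-model code length for one bitplane's significance map. The coefficients below are synthetic Gaussians; this is not the dissertation's coder.

```python
import math
import random

# Toy sketch of the coder's driving idea (not the dissertation's coder):
# for one bitplane, estimate the probability that a coefficient is
# significant (magnitude above the threshold) and use it as a fixed,
# nonadaptive model. The ideal code length for the significance map is
# then its binary entropy. Coefficients here are synthetic Gaussians.

random.seed(1)
coeffs = [random.gauss(0.0, 4.0) for _ in range(4096)]
threshold = 8.0                      # current bitplane threshold

sig = [abs(c) >= threshold for c in coeffs]
p = sum(sig) / len(sig)              # explicit significance estimate

def bits_per_symbol(p):
    # ideal fixed-model (nonadaptive) arithmetic-coding cost per symbol
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

total_bits = bits_per_symbol(p) * len(sig)
assert 0.0 < p < 0.5
assert total_bits < len(sig)         # well under 1 bit per coefficient
```

Because the model is fixed for the whole map, the coder needs no per-symbol model update, which is what makes the implementation simple and amenable to vectorization.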
Development of compensated immersion 3D optical profiler based on interferometry. Mukhtar, Husneni, 29 June 2018.
Coherence Scanning Interferometry (CSI), or White Light Scanning Interferometry (WLSI), is a well-established optical imaging technique for measuring the surface roughness and the shape of microscopic surfaces. Its advantages are nanometric axial sensitivity, a wide field of view (hundreds of μm to several mm), and measurement speed (a few seconds to a few minutes). The technique is based on optical interferometry in a Linnik configuration, which is very difficult to adjust but offers several advantages: higher numerical aperture objectives to improve spatial resolution; a long working distance, because no components are needed in front of the objective; a polarized light mode configuration; and high-contrast fringes, owing to the possibility of modifying the optical paths and the intensities of the two arms independently. The use of a water-immersion objective gives further advantages: it avoids the problems related to the adjustment between the formation of the fringes and the plane of image formation, and it minimizes the difference in dispersion between the arms of the interferometer. In order to measure in water mode and to obtain high lateral resolution on chemical and biological samples, several challenges must be overcome, such as balancing the OPD of the two arms, finding and adjusting good-contrast fringes, and finding and adapting a suitable water compensation in the horizontal reference arm to operate the system in water.
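The height-measurement principle of CSI can be sketched with synthetic data: the fringe envelope of a white-light correlogram peaks at the surface height, which a vertical scan recovers. The wavelength, coherence length, and height below are invented for illustration.

```python
import math

# Illustrative CSI sketch with synthetic numbers: a white-light
# correlogram I(z) is a fringe carrier under a coherence envelope
# centred at the surface height z0; scanning z and locating the
# intensity maximum recovers z0. Values below are invented.

lam = 0.6    # mean wavelength, um
lc = 1.5     # coherence length, um
z0 = 2.35    # true surface height, um (to be recovered)

def correlogram(z):
    envelope = math.exp(-((z - z0) / lc) ** 2)
    return 1.0 + envelope * math.cos(4.0 * math.pi * (z - z0) / lam)

zs = [0.01 * i for i in range(601)]      # vertical scan, 10 nm steps
z_hat = max(zs, key=correlogram)         # global fringe/envelope peak
assert abs(z_hat - z0) < 1e-6
```

Practical systems demodulate the envelope rather than take a raw maximum, and the water-immersion dispersion compensation discussed above exists precisely to keep the envelope peak and the fringe peak aligned.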