221 |
Image coding with H.264 I-frames / Stillbildskodning med H.264 I-frames. Eklund, Anders. January 2007 (has links)
In this thesis work, part of the video coding standard H.264 has been implemented. The part of the coder used to code I-frames has been implemented to see how well suited it is for regular still-image coding. The main difference compared with other image coding standards, such as JPEG and JPEG2000, is that this coder uses both a predictor and a transform to compress the I-frames, whereas JPEG and JPEG2000 use only a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method thus works much like a video encoder, with the difference that blocks within an image are predicted instead of frames in a video sequence.
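To make the prediction-plus-transform idea concrete, the following is a minimal sketch in the spirit of the coder described above, not the actual H.264 integer transform or its directional prediction modes: a 4x4 block is predicted from its left neighbours, the residual is transformed with an orthonormal DCT and uniformly quantized, and the block is then reconstructed. The block size, quantization step and function names are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block_intra(block, left_col, qstep=8.0):
    """Toy intra coder: horizontal prediction, 2-D DCT of the residual,
    uniform quantization, then reconstruction (illustrative only)."""
    # Horizontal prediction: each row is predicted by the reconstructed
    # pixel immediately to the left of the block.
    pred = np.repeat(left_col.reshape(-1, 1), block.shape[1], axis=1)
    residual = block.astype(float) - pred

    # Transform and quantize the (mostly small) prediction error.
    coeffs = dctn(residual, norm="ortho")
    q = np.round(coeffs / qstep)          # what would be entropy-coded

    # Decoder side: dequantize, inverse transform, add the prediction back.
    recon = idctn(q * qstep, norm="ortho") + pred
    return q, np.clip(np.rint(recon), 0, 255).astype(np.uint8)

# Example: a smooth 4x4 block yields mostly zero-valued quantized coefficients.
blk = np.array([[52, 54, 55, 57]] * 4, dtype=np.uint8)
q, rec = code_block_intra(blk, left_col=np.array([51, 53, 54, 56]))
```

On smooth blocks the prediction removes most of the signal, so the quantized coefficients are largely zero, which is what the abstract refers to.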
|
222 |
Implementering av realtidsvideolänk med MPEG- och wavelet-teknik / Implementation of a Real Time Video Transmission Link using MPEG and Wavelet Methods. Heijdenberg, Karl; Johansson, Thomas. January 2004 (has links)
Saab Aerosystems in Linköping, Sweden, operates a presentation and manoeuvre simulator for the JAS-39 Gripen fighter jet, called PMSIM. In this thesis we study how to transfer sensor images generated by PMSIM to other simulators or desktop computers. Because the transmission channel is band-limited, some form of image coding must be used, so the greater part of this thesis is concerned with image coding. To meet the real-time requirement, the image coding has to be fairly simple and the transmission fast. Fast transmission in turn requires a network protocol with as little overhead as possible; such a protocol has therefore been designed and implemented. The report also includes a survey of real radio links, investigating how the quality of the video stream is affected by noise and other disturbances. The work revolves around the implementation of a video link whose purpose is to transmit and display sensor images. The link consists of three main parts: an image coder, a network link and an image player. The image coding has been focused on MPEG and wavelets. The wavelet technique is not a well-known coding principle for video applications, although it is well established for still images; it is used, for instance, in the JPEG2000 standard. Experiments conducted and published in this report suggest that, for some applications, the wavelet technique can be a viable alternative to MPEG for a video coder.
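As a rough sketch of the wavelet side of such a coder, and assuming nothing about the actual PMSIM link code, one level of a 2-D Haar decomposition can be written in a few lines; the detail subbands are small for natural imagery, which is what makes the coefficients cheap to code.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet transform (illustrative sketch).

    Returns the low-pass approximation and the three detail subbands
    (horizontal, vertical, diagonal); energy concentrates in the
    approximation, which is what makes the coefficients compressible.
    """
    x = img.astype(float)
    # Transform rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Then transform the columns of each result the same way.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, (lh, hl, hh)

frame = np.random.default_rng(0).integers(0, 256, size=(64, 64))
approx, details = haar2d_level(frame)
```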
|
223 |
Ultra High Compression For Weather Radar Reflectivity Data. Makkapati, Vishnu Vardhan. 11 1900 (has links)
Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry an onboard radar, but its range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions. It would be helpful if these data could be transmitted to the pilot. However, the data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive. Hence, the data have to be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. This thesis develops a scheme for extreme compression of weather radar data that does not significantly degrade the meteorological information contained in the data.
The method is based on contour encoding. It approximates a contour by a set of systematically chosen ‘control’ points that preserve its fine structure up to a certain level. The contours may be obtained using a thresholding process based on NWS or custom reflectivity levels. This process may result in region and hole contours, enclosing ‘high’ or ‘low’ areas, which may be nested; a tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. The points on the original contour with maximum deviation from the smoothed contour between the crossings of the two contours are then identified and designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the length of the segment between the crossing points exceeds a certain length. The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end.
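A sketch of the flavour of this control-point selection follows; the smoothing window, the contour representation as an (N, 2) point array, and the use of a signed perpendicular offset to locate the crossings are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def control_points(contour, window=11):
    """Select control points on a closed contour (illustrative sketch).

    contour: (N, 2) array of (x, y) points.  A circular moving average
    gives the smoothed reference contour; wherever the original contour
    crosses it (the signed offset changes sign), the point of maximum
    absolute deviation in between is kept as a control point.
    """
    contour = np.asarray(contour, dtype=float)
    n = len(contour)
    # Pad circularly so the moving average wraps around the closed contour.
    pad = np.vstack([contour[-window:], contour, contour[:window]])
    kernel = np.ones(window) / window
    smooth = np.column_stack([
        np.convolve(pad[:, d], kernel, mode="same")[window:window + n]
        for d in (0, 1)
    ])
    diff = contour - smooth
    tangent = np.gradient(smooth, axis=0)
    # Signed perpendicular offset of the original contour from the reference.
    offset = tangent[:, 0] * diff[:, 1] - tangent[:, 1] * diff[:, 0]
    crossings = np.flatnonzero(np.diff(np.sign(offset)) != 0)
    picks = [a + int(np.argmax(np.abs(offset[a:b + 1])))
             for a, b in zip(crossings[:-1], crossings[1:])]
    return np.array(sorted(set(picks)), dtype=int)
```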
The contour is retrieved from the control points at the receiving end using spline interpolation. The region and hole contours are identified using the tag bit. The pixels between the region and hole contours at a given threshold level are filled with the color corresponding to that level. This is repeated until all the contours for a given threshold level are exhausted, and the process is then carried out for all other thresholds, resulting in a composite picture of the reconstructed field.
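A minimal sketch of this reconstruction step, assuming the control points are available as an (N, 2) array and using SciPy's periodic parametric spline (degree 2, in line with the findings reported below); the output sampling density is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def reconstruct_contour(ctrl_pts, n_out=200, degree=2):
    """Rebuild a closed contour from its control points via spline
    interpolation (sketch; not the thesis implementation)."""
    x, y = ctrl_pts[:, 0].astype(float), ctrl_pts[:, 1].astype(float)
    # per=True closes the curve; s=0 forces interpolation through the points.
    tck, _ = splprep([x, y], k=degree, s=0, per=True)
    u = np.linspace(0.0, 1.0, n_out)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```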
Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. It is shown that a smoothing percentage of about 10% is optimal for most data. Spline interpolation of degree 2 is found to be best suited for smooth contour reconstruction. Augmenting the NWS thresholds improves visual perception, but at the expense of a decrease in the compression ratio.
Two enhancements to the basic method are proposed: adjusting the control points to achieve better reconstruction, and manipulating the bits of the control points to obtain higher compression. Spline interpolation inherently tends to pull the reconstructed contour away from the control points. This is compensated for, to some extent, by stretching the control points away from the smoothed reference contour; the amount and direction of the stretch are optimized with respect to actual data fields to yield better reconstruction. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail. Simple bit truncation introduces a bias into the contour description and reconstruction, which is removed to a great extent by employing a bias compensation mechanism. The results obtained are compared with other methods devised for encoding weather radar contours.
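The bias-compensation idea can be illustrated in isolation, assuming integer control-point addresses and an illustrative two-bit truncation: dropping the least significant bits always rounds a coordinate downward, so adding back half of the discarded range at the decoder recentres the error around zero.

```python
def truncate_with_bias_compensation(coord, k=2):
    """Drop the k least significant bits of an integer coordinate and
    compensate the systematic downward bias at reconstruction (sketch)."""
    truncated = (coord >> k) << k                 # what would be transmitted
    reconstructed = truncated + (1 << (k - 1))    # add half the step back
    return truncated, reconstructed

# Example: with k=2, coordinate 13 becomes 12 on the wire, 14 after compensation.
print(truncate_with_bias_compensation(13, k=2))
```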
|
224 |
Strategien für die Instruktionscodekompression in cachebasierten, eingebetteten Systemen / Strategies for Instruction Code Compression in Cache-Based, Embedded Systems. Jachalsky, Jörn. January 1900 (has links)
Thesis--Technische Universität Hannover. / Includes bibliographical references.
|
225 |
Vector wavelet transforms for the coding of static and time-varying vector fields. Hua, Li. January 2003 (has links)
Thesis (Ph. D.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
226 |
Object-based unequal error protection. Marka, Madhavi. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
227 |
Delay sensitive delivery of rich images over WLAN in telemedicine applications. Sankara Krishnan, Shivaranjani. January 2009 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
|
228 |
Efficient data acquisition, transmission and post-processing for quality spiral Magnetic Resonance Imaging. Jutras, Jean-David. Unknown Date
No description available.
|
229 |
Model Selection via Minimum Description Length. Li, Li. 10 January 2012 (has links)
The minimum description length (MDL) principle originated in the data compression literature and has been used to derive statistical model selection procedures. Most existing methods based on the MDL principle focus on models for independent data, particularly in the context of linear regression. The data considered in this thesis are repeated measurements, and the exploration of the MDL principle begins with classical linear mixed-effects models. We distinguish two research focuses: one concerns the population parameters and the other concerns the cluster/subject parameters. When the research interest is at the population level, we propose a class of MDL procedures that incorporate the dependence structure within an individual or cluster with data-adaptive penalties and enjoy the advantages of Bayesian information criteria. When the number of covariates is large, the penalty term is adjusted by a data-adaptive structure to diminish the underselection issue of BIC and to mimic the behaviour of AIC. Theoretical justifications are provided from both data compression and statistical perspectives. Extensions to categorical responses modelled by generalized estimating equations and to functional data modelled by functional principal components are illustrated. When the interest is at the cluster level, we use the group LASSO to set up a class of candidate models, and then derive an MDL criterion for this LASSO technique in a group manner to select the final model via the tuning parameters. Extensive numerical experiments demonstrate the usefulness of the proposed MDL procedures at both the population and cluster levels.
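As a generic illustration of description-length-based model selection, and not the data-adaptive criterion derived in the thesis, the sketch below scores nested linear models by a two-part code length of the familiar BIC form, in which the (p/2) log n term plays the role of the cost of describing the fitted parameters.

```python
import numpy as np

def description_length(y, X):
    """Two-part code length of a Gaussian linear model, up to constants:
    (n/2) * log(RSS/n) for the data given the fit, plus (p/2) * log(n)
    for the fitted parameters (the familiar BIC form)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return 0.5 * n * np.log(rss / n) + 0.5 * p * np.log(n)

rng = np.random.default_rng(0)
n = 200
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = X_full[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# Score nested candidate models; the smallest description length wins.
scores = {p: description_length(y, X_full[:, :p]) for p in range(1, 6)}
best = min(scores, key=scores.get)
```

Replacing the log n factor by 2 gives an AIC-style score, which is roughly the direction in which the data-adaptive penalties described above move when the number of covariates grows.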
|