1

Analogue-to-information system based on PLL-based frequency synthesizers with fast locking schemes

Lin, Ming-Lang January 2010 (has links)
Data conversion is the crucial interface between the real world and digital processing systems. Analogue-to-digital converters (ADCs) and digital-to-analogue converters are the two key devices used at this interface. Conventional ADCs based on the Nyquist sampling theorem now face a critical challenge: the resolution and the sampling rate must be radically increased for emerging applications such as radar detection and ultra-wideband communication. The offset of comparators and the setup time of sample-and-hold circuits, however, limit the resolution and clock rate of ADCs. In other applications, such as speech and temperature sensing, signals may remain unchanged for prolonged periods with only brief bursts of significant activity. If traditional ADCs are employed in such circumstances, a higher bandwidth is required for transmitting the converted samples. Conversely, extremely high sampling clock rates are required for converting signals that are sparse in the time domain. The level-crossing sampling scheme (LCSS) is one of the data conversion schemes suited to signals with this sparsity feature and brief bursts of significant activity. Because the traditional LCSS, with its fixed clock rate, is limited in its applications, a novel irregular data conversion scheme called the analogue-to-information system (AIS) is proposed in this thesis. The AIS is based on the LCSS, but adds an adjustable clock generator and a real-time data compression scheme. System-level simulations of the AIS show that a data transmission saving of nearly 30% is achieved across different signals. PLLs with fast pull-in and locking schemes are very important in TDMA systems and frequency-hopping wireless systems, so a novel triple path nonlinear phase frequency detector (TPNPFD) is also proposed in this thesis. Compared to other PFDs, the pull-in and locking time of the TPNPFD is much shorter. A proper transmission data format can make the recreation of the skipped samples and the reconstruction of the original signal more efficient, i.e. they can be achieved with a minimum number of received data without adding much hardware complexity. A preliminary data format for transmitting the converted data from the AIS is therefore given in the final chapter of this thesis as a basis for future work.
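The level-crossing idea at the heart of the AIS can be sketched in a few lines. The following is a minimal illustration of a generic LCSS, assuming a toy burst-like signal, a uniform set of levels, and linear interpolation for the crossing instants; it is not the thesis's AIS, which additionally employs an adjustable clock generator and real-time compression.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Record (time, level) pairs whenever the signal crosses a quantisation level."""
    samples = []
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        for lv in levels:
            if lo < lv <= hi:  # the signal crossed this level in (t[i-1], t[i]]
                # linear interpolation gives the crossing instant
                tc = t[i - 1] + (lv - x[i - 1]) / (x[i] - x[i - 1]) * (t[i] - t[i - 1])
                samples.append((tc, lv))
    return samples

t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)  # burst-like, sparse-in-time signal
levels = np.linspace(-1.0, 1.0, 9)              # 8-interval uniform quantiser
samples = level_crossing_sample(t, x, levels)
print(f"{len(samples)} level-crossing samples vs {len(t)} uniform samples")
```

For a signal that is quiescent most of the time, the number of emitted samples is far below the uniform count, which is the transmission saving the abstract refers to.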
2

The Kalman filter under non-uniform sampling

Τριανταφύλλου, Θωμαΐα 21 October 2011 (has links)
In this thesis we deal with the Kalman filter under non-uniform sampling. Whether used separately or in combination, these two subjects are very important tools for science and technology. The Kalman filter is used with great success for estimating and analyzing dynamic systems; its applications cover several fields such as engineering, materials science, economics, and even medicine. Non-uniform sampling, i.e. the sampling of signals at uneven time intervals, is meanwhile growing continuously in use and offers many advantages. The aim of this thesis is to study and analyze these two elements and to draw conclusions about the best possible signal processing algorithm. Chapter 1 deals with the general concept of sampling and analyzes non-uniform sampling in particular. Chapter 2 gives an initial introduction to how filters interact with sampling. Chapter 3 then covers in detail the theoretical and computational concepts around the Kalman filter. Chapter 4 presents the implementation of Kalman filter algorithms with uniform and non-uniform sampling. Chapter 5 compares the algorithms, concludes which is the most effective, and mentions some applications. Finally, Chapter 6 contains an appendix with the code used, and Chapter 7 lists the sources analyzed.
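The core adaptation discussed in the thesis — letting the Kalman filter's prediction step use the actual elapsed time between irregular samples — can be sketched as follows. This is a hedged illustration assuming a simple constant-velocity state model and position-only measurements; the thesis's own models and noise settings may differ.

```python
import numpy as np

def kalman_irregular(times, zs, q=1e-2, r=1e-1):
    x = np.zeros(2)              # state: [position, velocity]
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])   # we observe position only
    estimates = []
    t_prev = times[0]
    for t, z in zip(times, zs):
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])      # transition depends on dt
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])        # discretized process noise
        x = F @ x                                   # predict over the actual gap
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                         # innovation covariance
        K = P @ H.T / S                              # Kalman gain (S is 1x1 here)
        x = x + (K * (z - H @ x)).ravel()            # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
        t_prev = t
    return np.array(estimates)

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 10, 200))            # non-uniform sample times
zs = np.sin(times) + 0.3 * rng.standard_normal(200)
est = kalman_irregular(times, zs)
```

With uniform sampling, dt is constant and F and Q reduce to fixed matrices, which recovers the standard textbook filter the thesis compares against.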
3

Irregular sampling: from aliasing to noise

Hennenfent, Gilles, Herrmann, Felix J. January 2007 (has links)
Seismic data are often irregularly and/or sparsely sampled along the spatial coordinates. We show that these acquisition geometries are not necessarily an obstacle to accurately reconstructing adequately sampled data. We use two examples to illustrate that irregular subsampling may actually be better than equivalent regular subsampling, a point already made in earlier works by other authors. We explain this behavior with two key observations. Firstly, a noise-free underdetermined problem can be seen as a noisy well-determined problem. Secondly, regular subsampling creates strong coherent acquisition noise (aliasing) that is difficult to remove, whereas the noise created by irregular subsampling is typically weaker and Gaussian-like.
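The second observation can be reproduced numerically. The sketch below is our own toy experiment, not the authors': it subsamples a single harmonic regularly and irregularly at the same rate and compares the strongest spurious spectral peak in each case.

```python
import numpy as np

n = 1024
t = np.arange(n)
f_bin = 64
x = np.cos(2 * np.pi * f_bin * t / n)       # one harmonic, exactly on a DFT bin

mask_reg = np.zeros(n)
mask_reg[::4] = 1                            # keep every 4th sample (regular)
rng = np.random.default_rng(1)
mask_irr = np.zeros(n)
mask_irr[rng.choice(n, n // 4, replace=False)] = 1   # same fraction, irregular

for name, mask in (("regular", mask_reg), ("irregular", mask_irr)):
    spec = np.abs(np.fft.rfft(x * mask))
    spur = spec.copy()
    spur[f_bin - 2:f_bin + 3] = 0            # ignore the true peak itself
    print(f"{name:9s}: strongest spurious / true peak = {spur.max() / spec[f_bin]:.3f}")
```

Regular decimation produces aliases as strong as the true peak, while the jittered mask spreads the same missing-sample energy into many weak, noise-like components.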
4

Signal processing issues related to deterministic sea wave prediction

Abusedra, Lamia January 2009 (has links)
The bulk of research in wave-related areas treats sea waves as stochastic objects, leading to wave forecasting techniques based on statistical approaches. Given the complex dynamics of sea wave behaviour, statistical techniques are probably the only viable approach when forecasting over substantial spatial and temporal intervals. This view changes, however, when limiting the forecasting time to a few seconds, or when the goal is to estimate the quiescent periods that occur due to the beating interaction of the wave components, especially in narrow-band seas. This work considers the multidisciplinary research field of deterministic sea wave prediction (DSWP), exploring aspects of DSWP associated with shallow-angle LIDAR systems. The main goal of this project is to study and develop techniques to reduce the prediction error.

The first part deals with data problems specific to shallow-angle LIDAR systems, while the remainder concentrates on the prediction system and propagation models regardless of the source of the data. The two main LIDAR data problems addressed are the non-uniform sample distribution and the shadow regions. An empirical approach is used to identify the characteristics of shadow regions associated with different wave conditions and laser positions. A new reconstruction method is developed to address the non-uniform sampling problem; it is shown that including more information about the geometry and dynamics of the problem improves the reconstruction error considerably.

The frequency-domain approach to the wave propagation model is then examined, and the effect of energy leakage on the prediction error is illustrated. Two approaches to reducing this error are explored. First, a modification of the simple dispersive phase-shifting filter is tested and shown to improve the prediction. Second, the energy leakage is reduced with an iterative Window-Expansion method; this achieves a significant improvement in the prediction error compared with the End-Matching method typically used in DSWP systems. The final part of the frequency-domain work defines the prediction region boundaries associated with a given prediction accuracy.

The second propagation model approach works in the time/space domain, where the prediction is formed by convolving the measured data with the propagation filter's impulse response. Properties of these impulse responses, which turn out to be quite complicated, are identified. The relations between the impulse response duration and shift and the prediction time and distance are studied, and these properties are quantified by polynomial approximation and non-symmetric filter analysis. A new method is shown to associate the impulse response properties with the prediction region in both the Fixed Time and Fixed Point modes.
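The simple dispersive phase-shifting filter mentioned above can be sketched in the frequency domain. The following is a hedged illustration assuming the deep-water dispersion relation ω² = gk and an arbitrary stand-in elevation record; the thesis's modified filter, windowing scheme, and LIDAR geometry are not reproduced here.

```python
import numpy as np

g = 9.81                                   # gravitational acceleration, m/s^2

def propagate(eta, fs, dx):
    """Phase-shift every frequency component of the record eta by k(omega) * dx."""
    E = np.fft.rfft(eta)
    omega = 2 * np.pi * np.fft.rfftfreq(len(eta), d=1.0 / fs)
    k = omega**2 / g                       # deep-water dispersion: omega^2 = g k
    # The sign of the phase depends on the travel-direction convention chosen.
    return np.fft.irfft(E * np.exp(-1j * k * dx), n=len(eta))

fs = 4.0                                              # Hz, illustrative rate
eta = np.random.default_rng(2).standard_normal(512)  # stand-in elevation record
eta_pred = propagate(eta, fs, dx=50.0)                # record predicted 50 m away
```

Because the FFT assumes a periodic record, any mismatch at the record ends leaks energy across frequencies, which is exactly the error source the Window-Expansion and End-Matching methods discussed above try to control.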
5

Improving the Modeling Framework for DCE-MRI Data in Hepatic Function Evaluation

Mossberg, Anneli January 2013 (has links)
Background: Mathematical modeling combined with prior knowledge of the pharmacokinetics of the liver-specific contrast agent Gd-EOB-DTPA has the potential to extract more information from Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data than previously possible. The ultimate goal of that work is to create a liver model that describes DCE-MRI data well enough to be used as a diagnostic tool in liver function evaluation. This goal has not yet been fully reached, and there is still work to be done in this area. In this thesis, an existing liver model is implemented in the software Wolfram SystemModeler (WSM), the corresponding modeling framework is further developed to better handle the temporally irregular sampling of DCE-MRI data, and an attempt is made to determine an optimal sampling design in terms of when and how often to collect images. In addition to these original goals, the work revealed two more issues that needed to be dealt with. Firstly, new standard deviation (SD) estimation methods for non-averaged DCE-MRI data were required in order to statistically evaluate the models. Secondly, the original model's poor capability of describing the early dynamics of the system led to the creation of an additional liver model in an attempt to model the bolus effect.

Results: The model was successfully implemented in WSM, after which regional optimization was implemented as an attempt to handle clustered data. Tests on the available data did not show any substantial difference in optimization outcome, but since the analyses were performed on only three patient data sets, this is not enough to disregard the method. As a means of determining optimal sampling times, the determinant of the inverse Fisher Information Matrix was minimized, which revealed that frequent sampling is most important during the initial phase (~50-300 s post injection) and at the very end (~1500-1800 s). Three new means of estimating the SD were proposed; of these, a spatio-temporal SD was deemed most reasonable under the current circumstances. If a better initial fit is achieved, yet another method, estimating the variance as an optimization parameter, might be implemented. As a result of the new standard deviation, the model failed to be statistically accepted during optimizations. The additional model created to include the bolus effect, and thereby better fit the initial-phase data, was also rejected.

Conclusions: The value of regional optimization is uncertain at this time, and additional tests must be made on a large number of patient data sets to determine its worth. The Fisher Information Matrix will be of great use in determining when and how often to sample once the model achieves a more acceptable fit in both the early and the late phase of the system. Although the indication that it is important to sample densely in the early phase is rather intuitive, given the poor model fit in that region, the analyses also revealed that the final observations have a relatively high impact on the model prediction error, which was not previously known. Hence, an important measure of how suitable a sampling design is, in terms of the resulting model accuracy, has been suggested. The original model was rejected due to its inability to fit the data during the early phase. This poor initial fit could not be improved enough by modelling the bolus effect, so the new implementation of the model was also rejected. Recommendations are made that might assist the further development of the liver model so that it can describe the true physiology and behaviour of the system in all phases. These include, but are not limited to, the addition of an extra blood plasma compartment, more thorough modelling of the spleen's uptake of the contrast agent, and separation of certain differing signals that are now averaged.
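The sampling-design criterion used above — minimizing the determinant of the inverse Fisher Information Matrix (D-optimality) — can be sketched as follows. The toy uptake model, its parameters, and the candidate schedules are illustrative assumptions, not the thesis's liver model.

```python
import numpy as np

def fisher_information(times, sens, sigma):
    """F = (1/sigma^2) * J^T J for i.i.d. Gaussian noise; J[i, p] = d model(t_i) / d theta_p."""
    J = sens(times)
    return J.T @ J / sigma**2

# Toy two-parameter uptake model c(t) = a * (1 - exp(-b t)) and its sensitivities.
a, b = 1.0, 0.01
def sens(t):
    return np.column_stack([1 - np.exp(-b * t),        # dc/da
                            a * t * np.exp(-b * t)])   # dc/db

candidates = [np.linspace(50, 300, 12),                # dense early sampling
              np.linspace(50, 1800, 12)]               # spread over the whole scan
for ts in candidates:
    F = fisher_information(ts, sens, sigma=0.05)
    det_inv = np.linalg.det(np.linalg.inv(F))
    print(f"det(F^-1) = {det_inv:.3e} for schedule {ts[0]:.0f}-{ts[-1]:.0f} s")
```

A smaller det(F⁻¹) means a smaller parameter-confidence volume, so comparing schedules this way indicates where additional samples pay off most.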
6

Irregularly sampled image restoration and interpolation

Facciolo Furlan, Gabriele 03 March 2011 (has links)
The generation of urban digital elevation models from satellite images using stereo reconstruction techniques poses several challenges due to its precision requirements. In this thesis we study three problems related to the reconstruction of urban models using stereo images in a low-baseline disposition. They were motivated by the MISS project, launched by the CNES (Centre National d'Etudes Spatiales) in order to develop a low-baseline acquisition model. The first problem is the restoration of irregularly sampled images and image fusion using a band-limited interpolation model. A novel restoration algorithm is proposed, which incorporates the image formation model as a set of local constraints and uses a family of regularizers that allows the spectral behavior of the solution to be controlled. Secondly, the problem of interpolating sparsely sampled images is addressed using a self-similarity prior. The related problem of image inpainting is also considered, and a novel framework for exemplar-based image inpainting is proposed; this framework is then extended to the interpolation of sparsely sampled images. The third problem is the regularization and interpolation of digital elevation models under geometric restrictions derived from a reference image. For this problem three regularization models are studied: an anisotropic minimal surface regularizer, the anisotropic total variation, and a new piecewise affine interpolation algorithm.
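The first problem's formulation — restoring a band-limited signal from irregular samples with a regularizer that controls the spectral behavior of the solution — can be sketched in 1-D. The truncated Fourier model, quadratic frequency penalty, and sample layout below are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def restore(ts, ys, K, lam=1e-3):
    """Fit y(t) ~ sum_{k=-K..K} c_k exp(2*pi*i*k*t) from irregular samples ts in [0, 1)."""
    ks = np.arange(-K, K + 1)
    A = np.exp(2j * np.pi * np.outer(ts, ks))    # nonuniform Fourier matrix
    w = lam * ks**2                               # penalize high frequencies
    c = np.linalg.solve(A.conj().T @ A + np.diag(w), A.conj().T @ ys)
    return ks, c

def evaluate(ks, c, tq):
    """Resample the fitted band-limited model on a regular grid tq."""
    return (np.exp(2j * np.pi * np.outer(tq, ks)) @ c).real

rng = np.random.default_rng(3)
ts = np.sort(rng.uniform(0, 1, 120))              # irregular sample times
ys = np.sin(2 * np.pi * 3 * ts) + 0.05 * rng.standard_normal(120)
ks, c = restore(ts, ys, K=10)
y_hat = evaluate(ks, c, np.linspace(0, 1, 500))
```

Changing the weight profile w trades data fidelity against spectral decay, which is the role the family of regularizers plays in the abstract's description.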
7

Signal Processing Methods for Ultra-High Resolution Scatterometry

Williams, Brent A. 05 April 2010 (has links) (PDF)
This dissertation approaches high resolution scatterometry from a new perspective. Three related topics are addressed: high resolution σ^0 imaging, wind estimation from high resolution σ^0 images over the ocean, and high resolution wind estimation directly from the scatterometer measurements. Theories for each topic are developed, and previous approaches are generalized and formalized. Improved processing algorithms for these theories are developed, implemented for particular scatterometers, and analyzed. Specific results and contributions are noted below. The σ^0 imaging problem is approached as the inversion of a noisy, aperture-filtered sampling operation, extending the current theory to deal explicitly with noise. A maximum a posteriori (MAP) reconstruction estimator is developed to regularize the problem and deal appropriately with noise. The method is applied to the SeaWinds scatterometer and the Advanced Scatterometer (ASCAT); the MAP approach produces high resolution σ^0 images without the ad-hoc processing steps employed in previous methods. An ultra-high resolution (UHR) wind product had previously been developed and shown to produce valuable high resolution information, but its theory had not been formalized. This dissertation develops the UHR sampling model and noise model and explicitly states the implicit assumptions involved. Improved UHR wind retrieval methods are also developed. The developments in the σ^0 imaging problem are extended to deal with the nonlinearities involved in wind field estimation: a MAP wind field reconstruction estimator is developed and implemented for the SeaWinds scatterometer. MAP wind reconstruction produces a wind field estimate that is consistent with the conventional product, but with higher resolution. The MAP reconstruction estimates have a resolution similar to the UHR estimates, but with less noise. A hurricane wind model is applied to obtain an informative prior used in MAP estimation, which reduces noise and ameliorates ambiguity selection and rain contamination.
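For a linear measurement model with Gaussian noise and a Gaussian prior, the MAP reconstruction idea described above reduces to a regularized least-squares problem. The sketch below uses a random stand-in for the aperture-filtered sampling operator and 1-D toy sizes; it is not the dissertation's operator or implementation.

```python
import numpy as np

def map_reconstruct(A, z, sigma_n, sigma_x):
    """argmax p(x|z) for z = A x + n, n ~ N(0, sigma_n^2 I), x ~ N(0, sigma_x^2 I)."""
    n = A.shape[1]
    lhs = A.T @ A / sigma_n**2 + np.eye(n) / sigma_x**2   # posterior precision
    rhs = A.T @ z / sigma_n**2
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(4)
x_true = rng.standard_normal(64)                  # "true" sigma^0 field (1-D toy)
A = rng.standard_normal((200, 64)) / 8            # stand-in aperture/sampling operator
z = A @ x_true + 0.05 * rng.standard_normal(200)  # noisy measurements
x_map = map_reconstruct(A, z, sigma_n=0.05, sigma_x=1.0)
```

The prior term regularizes the inversion where the sampling operator is poorly conditioned, which is how a MAP estimator avoids the ad-hoc smoothing steps of earlier reconstruction methods.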
