About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Channel modelling and relay for powerline communications

Tan, Bo January 2013 (has links)
The thesis discusses channel modelling and relay techniques in powerline communications (PLC), which is considered a promising technology for Smart Grid communications, Internet access and home area networks (HAN). In this thesis, the statistical PLC channel characteristics are investigated and a new statistical channel modelling method is proposed for indoor PLC. A series of relay protocols is then suggested for broadband communications over the power grid. The statistical channel modelling method is proposed to overcome the limits of traditional deterministic PLC channel models such as the multipath model and the transmission line model. To develop the channel model, the properties of the multipath magnitudes, the intervals between paths, the cable loss and the channel classification are investigated in detail; each property is then described by a statistical distribution or formula. Simulation results show that the statistical model can describe PLC channels as accurately as deterministic models without the topology information, which is time-consuming to collect. Relay transmission is proposed to help PLC adapt to diverse application scenarios. The protocols cover the main relay aspects, including decode/amplify forwarding, single/multiple relay nodes, and full/half-duplex relay operation. The capacity performance of each protocol is given and compared. According to the simulation results, several factors that improve the performance of PLC networks are identified: decode-and-forward is more suitable for the PLC environment, a deviation point or transformer station is a better location for placing the relay node, and full-duplex relay operation helps exploit the capacity potential of PLC networks. Some future work is outlined based on the statistical channel model and the relay study. In the last part of this thesis, a unit-based statistical channel model is initiated to adapt to various PLC channel conditions, and a more practical relay scenario containing multiple data terminals is proposed to approach realistic transmission conditions. Finally, relaying for narrowband PLC Smart Grid is also mentioned as a future research topic.
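As an illustration of the decode-and-forward versus amplify-and-forward comparison mentioned in this abstract, the following is a minimal sketch that evaluates the standard half-duplex single-relay capacity expressions for given link SNRs. The SNR values are hypothetical and the formulas are the textbook single-relay bounds, not the protocols developed in the thesis.

```python
import numpy as np

def af_capacity(snr_sd, snr_sr, snr_rd):
    """Half-duplex amplify-and-forward: direct link plus the amplified
    relay path, with the 1/2 pre-log for the two time slots."""
    relay_term = (snr_sr * snr_rd) / (snr_sr + snr_rd + 1.0)
    return 0.5 * np.log2(1.0 + snr_sd + relay_term)

def df_capacity(snr_sd, snr_sr, snr_rd):
    """Half-duplex decode-and-forward: limited by the source-relay link
    and by the combined source- and relay-destination links."""
    return 0.5 * min(np.log2(1.0 + snr_sr),
                     np.log2(1.0 + snr_sd + snr_rd))

# Hypothetical link SNRs (linear scale) for a short indoor PLC hop.
snr_sd, snr_sr, snr_rd = 10 ** 0.5, 10 ** 1.5, 10 ** 1.2

print(f"AF capacity: {af_capacity(snr_sd, snr_sr, snr_rd):.2f} bit/s/Hz")
print(f"DF capacity: {df_capacity(snr_sd, snr_sr, snr_rd):.2f} bit/s/Hz")
```

With a strong source-relay link, the DF expression typically exceeds the AF one, which is consistent with the abstract's observation that decode-and-forward suits the PLC environment.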
2

Advanced system design and performance analysis for high speed optical communications

Pan, Jie 08 June 2015 (has links)
The Nyquist WDM system realizes terabit, high-spectral-efficiency transmission by allocating several subcarriers with spacing close to or equal to the baud rate. This system achieves optimal performance by maintaining both temporal and spectral orthogonality. However, ISI and ICI effects are inevitable in practical Nyquist WDM implementations due to the imperfect channel response and tight channel spacing, and they may cause significant performance degradation. Our primary research goals are to combat the ISI effects via transmitter digital pre-shaping and to remove the ICI impairments at the receiver using MIMO signal processing. First, we propose two novel blind channel estimation techniques that enable the transmitter pre-shaping design for ISI mitigation. Both numerical and experimental results demonstrate that the two methods are very effective in compensating narrowband filtering and are very robust to channel estimation noise. Besides pre-shaping, DAC-enabled transmitter chromatic dispersion compensation is also demonstrated in a system with high LO laser linewidth. Next, a novel "super-receiver" structure is proposed, where different subchannels are synchronously sampled and the baseband signals from three adjacent subchannels are processed jointly to remove the ICI penalty. Three different ICI compensation methods are introduced and their performances compared. The important pre-processing steps that enable successful ICI compensation are also elaborated. In addition to ICI compensation, joint carrier phase recovery based on the Viterbi-Viterbi algorithm is also studied in carrier-phase-locked systems. In-band crosstalk arises from imperfect switch elements in the add-drop process of ROADM-enabled DWDM systems and may cause significant performance degradation. Our third research topic is to demonstrate a systematic way to analyze and predict the in-band crosstalk-induced penalty. In this work, we propose a novel crosstalk-to-ASE noise weighting factor that can be combined with the weighted crosstalk metric to incorporate in-band crosstalk noise into the Gaussian noise model for performance prediction and analysis. With the aid of the Gaussian noise model, the in-band crosstalk-induced nonlinear noise is also studied. Both simulations and experiments are used to validate the proposed methods.
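The abstract mentions carrier phase recovery based on the Viterbi-Viterbi algorithm. Below is a minimal single-channel sketch for QPSK: the received samples are raised to the fourth power to strip the modulation, block-averaged, and the angle is divided by four. The block length, laser phase drift and noise level are illustrative assumptions, and the joint multi-subchannel processing described in the thesis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative QPSK symbols with a slowly drifting carrier phase.
n = 4096
bits = rng.integers(0, 4, n)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))       # QPSK constellation points
phase_drift = np.cumsum(rng.normal(0, 0.01, n))        # random-walk laser phase
rx = tx * np.exp(1j * phase_drift) \
     + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Viterbi-Viterbi: 4th power removes the QPSK modulation, a sliding block
# average suppresses noise, and the angle divided by 4 estimates the phase.
block = 64
p4 = rx ** 4
avg = np.convolve(p4, np.ones(block) / block, mode="same")
phase_est = np.unwrap(np.angle(avg)) / 4 - np.pi / 4   # remove the pi/4 offset of x**4

rx_corrected = rx * np.exp(-1j * phase_est)
print("residual phase std (rad):", np.std(np.angle(rx_corrected / tx)))
```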
3

HNM-based DSP (Digital Signal Processing) module implementation of a TTS system

Βασιλόπουλος, Ιωάννης 16 May 2007 (has links)
A TTS (Text-To-Speech) system converts any given text into its corresponding speech with natural characteristics. A TTS consists of two modules, the Natural Language Processing (NLP) module and the Digital Signal Processing (DSP) module. The NLP module analyses the input text and supplies the DSP module with the appropriate phonemes and prosodic targets, namely the pitch, duration and volume of each phoneme. The DSP module then synthesizes speech with the target prosody, using speech analysis-synthesis algorithms such as HNM. The HNM (Harmonic plus Noise Model) algorithm models the speech signal as the sum of two parts, a harmonic part and a noise part. Using this model, speech analysis and synthesis, with or without prosodic modifications, is achieved.
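As an illustration of the harmonic-plus-noise decomposition described above, here is a minimal synthesis sketch: a voiced frame is built as a sum of harmonics of a fundamental frequency plus a noise component. The sample rate, fundamental frequency, amplitudes and noise level are hypothetical placeholders; real HNM analysis (estimating these parameters and the maximum voiced frequency from speech) is not shown.

```python
import numpy as np

fs = 16000                      # assumed sample rate (Hz)
f0 = 120.0                      # assumed fundamental frequency of a voiced frame (Hz)
frame_len = int(0.02 * fs)      # 20 ms frame
t = np.arange(frame_len) / fs

# Harmonic part: sum of harmonics of f0 with illustrative decaying amplitudes.
n_harm = int((fs / 2) // f0)
amps = 1.0 / np.arange(1, n_harm + 1)    # placeholder amplitudes
phases = np.zeros(n_harm)                 # placeholder phases
harmonic = sum(a * np.cos(2 * np.pi * (k + 1) * f0 * t + p)
               for k, (a, p) in enumerate(zip(amps, phases)))

# Noise part: low-level white noise standing in for the stochastic component.
rng = np.random.default_rng(1)
noise = 0.05 * rng.standard_normal(frame_len)

frame = harmonic + noise        # HNM frame = harmonic part + noise part
print(frame.shape, frame.dtype)
```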
4

ID photograph hashing: a global approach

Smoaca, Andreea 12 December 2011 (has links)
This thesis addresses the question of the authenticity of identity photographs, which form part of the documents required for access control. Since sophisticated means of reproduction are publicly available, new methods and techniques are needed to prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan. The study also focuses on the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the projection subspace is obtained by applying ICA and is then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained after projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of identity photographs and shows that the proposed method is robust to the print-and-scan attack. By focusing on robust hashing of a restricted class of images (identity photographs), the approach differs from classical approaches that address arbitrary images.
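The two stages described above (learn an ICA subspace, then project, quantize and binarize the coefficients) can be sketched as follows. The random training set, the number of components and the median-threshold binarization are assumptions for illustration; the entropic component selection and print-and-scan robustness analysis of the thesis are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Placeholder training set: 200 flattened grayscale "ID photographs" (32x32 pixels).
train = rng.random((200, 32 * 32))

# Learning stage: estimate an ICA subspace from the training images.
ica = FastICA(n_components=64, random_state=0)
ica.fit(train)

def hash_image(img_flat: np.ndarray) -> np.ndarray:
    """Extraction stage: project onto the ICA subspace, then binarize the coefficients."""
    coeffs = ica.transform(img_flat.reshape(1, -1))[0]
    return (coeffs > np.median(coeffs)).astype(np.uint8)  # simple median-threshold binarization

h = hash_image(rng.random(32 * 32))
print(h[:16], "hash length:", h.size)
```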
5

Multiscale Total Variation Estimators for Regression and Inverse Problems

Álamo, Miguel del 24 May 2019 (has links)
No description available.
6

A mathematical model of noise in narrowband power line communication systems

Katayama, Masaaki, Yamazato, Takaya, Okada, Hiraku, 片山, 正昭, 山里, 敬也, 岡田, 啓 07 1900 (has links)
No description available.
7

Comparison and Testing of Various Noise Wall Materials

Theberge, Ryan C. January 2014 (has links)
No description available.
8

Noise modeling and depth measurement calibration for Time-of-Flight cameras

Belhedi, Amira 04 July 2013 (has links)
3D cameras open new possibilities in fields such as 3D reconstruction, augmented reality and video surveillance, since they provide depth information at high frame rates. However, they have limitations that affect the accuracy of their measurements. For ToF cameras in particular, two types of error can be distinguished: the stochastic camera noise and the depth distortion. In the ToF camera literature, the noise is not well studied, and the depth distortion models are difficult to use and do not guarantee the accuracy required for some applications. The objective of this thesis is to study, model and propose a calibration method for these two error sources of ToF cameras that is both accurate and easy to set up. For the noise as well as for the depth distortion, two solutions are proposed, each addressing a different concern: the former aims at an accurate model, while the latter favours simplicity of set-up. For the noise, while the majority of existing models are based only on the amplitude information, we propose a first model that also integrates the pixel position in the image. For better accuracy, we propose a second model in which the amplitude is replaced by the depth and the integration time. Regarding the depth distortion, we propose a first solution based on a non-parametric model that guarantees better accuracy. We then use prior knowledge of the planar geometry of the observed scene to provide a solution that is easier to use than the previous one and than those in the literature.
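As a rough sketch of the kind of noise model described above (noise level as a function of depth and integration time), the following fits a simple least-squares surface to per-measurement noise standard deviations. The synthetic data, the generating law and the chosen regression terms are assumptions for illustration only, not the models developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: noise std observed at various depths (m)
# and integration times (ms); the generating law is purely illustrative.
depth = rng.uniform(0.5, 5.0, 500)
t_int = rng.uniform(0.5, 4.0, 500)
noise_std = 0.002 + 0.004 * depth / t_int + rng.normal(0, 2e-4, 500)

# Fit noise_std ~ a + b*depth + c/t_int + d*depth/t_int by linear least squares.
A = np.column_stack([np.ones_like(depth), depth, 1.0 / t_int, depth / t_int])
coeffs, *_ = np.linalg.lstsq(A, noise_std, rcond=None)

def predict_noise(d, t):
    """Predicted noise standard deviation at depth d and integration time t."""
    return coeffs @ np.array([1.0, d, 1.0 / t, d / t])

print("fitted coefficients:", np.round(coeffs, 5))
print("predicted std at 3 m, 2 ms:", predict_noise(3.0, 2.0))
```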
9

Quantization of Random Processes and Related Statistical Problems

Shykula, Mykola January 2006 (has links)
In this thesis we study scalar uniform and non-uniform quantization of random processes (or signals) in an average-case setting. Quantization (or discretization) of a signal is a standard task in all analog/digital devices (e.g., digital recorders, remote sensors, etc.). We evaluate the necessary memory capacity (or quantization rate) needed for quantized process realizations by exploiting the correlation structure of the model random process. The thesis consists of an introductory survey of the subject and related theory followed by four included papers (A-D).

In Paper A we develop a quantization coding method in which crossings of the quantization levels by a process realization are used for its coding. The asymptotic behavior of the mean quantization rate is investigated in terms of the correlation structure of the original process. For uniform and non-uniform quantization, we assume that the quantization cell width tends to zero and the number of quantization levels tends to infinity, respectively.

In Papers B and C we focus on an additive noise model for a quantized random process. Stochastic structures of asymptotic quantization errors are derived for some bounded and unbounded non-uniform quantizers when the number of quantization levels tends to infinity. The obtained results can be applied, for instance, to optimization problems in the design of quantization levels.

Random signals are quantized at sampling points and then further compressed. In Paper D the concern is statistical inference for the run-length encoding (RLE) method, one such compression technique, applied to quantized stationary Gaussian sequences. This compression method is widely used, for instance, in digital signal and image processing. First, we deal with mean RLE quantization rates for various probabilistic models. For a time series with unknown stochastic structure, we investigate asymptotic properties (e.g., asymptotic normality) of two estimates of the mean RLE quantization rate based on an observed sample when the sample size tends to infinity.

These results can be used in communication theory, signal processing, coding, and compression applications. Some examples and numerical experiments demonstrating applications of the obtained results to synthetic and real data are presented.
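To illustrate the kind of quantity studied in Paper D, the sketch below uniformly quantizes a synthetic stationary Gaussian AR(1) sequence, run-length encodes the quantized levels, and reports the number of runs per input sample as a simple proxy for a mean RLE rate. The AR(1) correlation, cell width and sample size are illustrative assumptions, and the thesis's exact rate definition and estimators may differ.

```python
import numpy as np
from itertools import groupby

rng = np.random.default_rng(0)

# Synthetic stationary Gaussian AR(1) sequence (illustrative correlation 0.9).
n, phi = 100_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + np.sqrt(1 - phi**2) * rng.normal()

# Uniform scalar quantization with cell width delta.
delta = 0.25
q = np.floor(x / delta).astype(int)

# Run-length encoding: (level, run length) pairs.
runs = [(level, sum(1 for _ in grp)) for level, grp in groupby(q)]

# Runs per input sample: a proxy for the mean RLE quantization rate
# (each run is stored as one level/length pair).
mean_rle_rate = len(runs) / n
print(f"runs: {len(runs)}, mean RLE rate: {mean_rle_rate:.4f} pairs per sample")
```

Increasing the correlation or the cell width lengthens the runs and lowers this rate, which is the kind of dependence on the correlation structure that the thesis analyzes.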
