  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Blind Adaptive Multiuser Detection for DS-CDMA System Based on Sliding Window RLS Algorithm

Pan, Wei-Hung 10 September 2004 (has links)
Direct sequence code division multiple access (DS-CDMA) is one of the significant multiplexing technologies used in wireless communication services. In the DS-CDMA framework, each user is assigned a distinct signature code sequence to achieve multiple access within the same frequency band and to allow signal separation at the receiver. Under multipath fading with a near-far effect, current CDMA systems employ the RAKE receiver to enhance system performance. It is known that, if training data is available, the minimum mean square error (MMSE) multiuser receiver, in which the average power of the receiver output is minimized subject to appropriate constraints, can be obtained directly from the constrained Wiener solution. When training data is not available, a blind multiuser receiver is an alternative approach that can approach the performance of the MMSE receiver. In this thesis, a blind multiuser receiver with linear constraints is devised based on the max/min criterion. The constraint equations are written in parametric form, depending on the multipath structure of the signal of interest, and the constraint parameters are jointly optimized with the parameters of the linear receiver to obtain the optimal solution. The sliding window linearly constrained RLS (SW-LC-RLS) algorithm is then employed to implement the optimal blind receiver under the max/min approach. The proposed scheme can suppress multiple access interference (MAI) in environments where narrowband interference (NBI) from other systems suddenly joins the DS-CDMA system and the near-far effect is severe. Under such circumstances the channel characteristics become strongly time-varying, so that the conventional LC-RLS and LC-LMS algorithms cannot perform well. Computer simulations confirm that the proposed scheme suppresses MAI in DS-CDMA systems better than existing schemes and is more robust against NBI and near-far problems.
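The core of such a receiver is a sliding-window RLS recursion: each new observation vector is folded into the inverse correlation estimate and the oldest one is removed, so the filter tracks rapid channel changes. Below is a minimal sketch of a plain (unconstrained) sliding-window RLS filter; it omits the linear constraints and the max/min optimization of the thesis, and the window length, regularization and toy channel are illustrative assumptions.

```python
# A minimal sketch of a sliding-window RLS adaptive filter (assumptions: plain
# least-squares criterion, no linear constraints or max/min optimization as in
# the thesis; window length and regularization are illustrative).
import numpy as np

class SlidingWindowRLS:
    def __init__(self, n_taps, window, delta=1e2):
        self.w = np.zeros(n_taps)          # filter weights
        self.P = delta * np.eye(n_taps)    # inverse correlation matrix estimate
        self.window = window               # sliding-window length
        self.buf = []                      # last `window` (regressor, desired) pairs

    def update(self, x, d):
        # Rank-one update: fold the newest sample (x, d) into w and P.
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)
        self.w += k * (d - x @ self.w)
        self.P -= np.outer(k, Px)
        self.buf.append((x, d))
        # Rank-one downdate: discard the sample leaving the window.
        if len(self.buf) > self.window:
            x_old, d_old = self.buf.pop(0)
            Px_old = self.P @ x_old
            g = Px_old / (1.0 - x_old @ Px_old)
            self.w -= g * (d_old - x_old @ self.w)
            self.P += np.outer(g, Px_old)
        return x @ self.w                  # current filter output

# Toy identification of a 4-tap channel from noisy observations.
rng = np.random.default_rng(0)
h = np.array([1.0, -0.5, 0.25, 0.1])
flt = SlidingWindowRLS(n_taps=4, window=50)
for _ in range(500):
    x = rng.standard_normal(4)
    flt.update(x, x @ h + 0.01 * rng.standard_normal())
print(np.round(flt.w, 3))   # should be close to h
```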
22

Beam-Enabled Acoustic Link Establishment (BEALE) for underwater acoustic networks

Watkins, Karen Piecara 31 October 2013 (has links)
There is growing interest in developing reliable, high-performance underwater acoustic networks (UWANs). However, the acoustic communication channel, with its slow sound propagation, high signal attenuation, and low bandwidth, presents significant challenges to network designers. One advantage offered by the acoustic channel is the ability to form directional communication beams, which improve signal strength and reduce interference. The work presented here describes a novel medium access control protocol for UWANs designated Beam-Enabled Acoustic Link Establishment (BEALE). BEALE addresses the inherent challenges of the acoustic channel by incorporating two techniques: link-level scheduling and dynamic directional beam steering. BEALE neighbors exchange packets based on a link-level schedule negotiated between the two nodes. This scheduling allows nodes to steer transmit and receive beams in the appropriate direction at the appropriate time while minimizing control overhead. Using steered, directional beams increases the gain between sender and receiver, reduces the sender's interference with other nodes, and, at the receiver, rejects possible interference from other nodes and from noise sources common in the ocean, resulting in increased spatial reuse. The core protocol has been modeled in a UWAN simulator developed specifically for this research. The results demonstrate significant improvement in throughput and packet loss over two benchmark UWAN random access protocols when evaluated over a variety of spatial node topologies and traffic patterns. The core BEALE protocol is further enhanced by a Half-Duplex (HDX) Sliding Window algorithm, which point-to-point simulation shows to markedly improve bandwidth utilization and error rate in large bandwidth-delay-product (BDP) situations. Extending the HDX Sliding Window to more complex multi-flow, two-way and multi-hop cases requires an additional level of communication coordination, provided by the BEALE Sliding Window Scheduler presented here; its functional challenges and novel concepts are described in detail. The BEALE protocol's performance suggests a rich list of potential future research directions, such as rigorous characterization of the BEALE Sliding Window Scheduler, accommodation of mobile nodes, operation of a BEALE-enabled network in which a central multi-beam sink node supports large numbers of simple source nodes, and rate adaptation.
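The link-level behaviour can be pictured with a generic half-duplex sliding-window exchange: the sender bursts a window of packets, turns the link around, and slides the window forward on a cumulative ACK. The sketch below is a Go-Back-N-style toy simulation under assumed window size and loss rate; it is not the BEALE HDX Sliding Window or its scheduler, and it assumes a reliable ACK to stay short.

```python
# A minimal sketch of a half-duplex sliding-window (Go-Back-N style) exchange.
# This is not the BEALE HDX Sliding Window or its scheduler; the window size,
# loss model and burst/ACK turnaround are illustrative assumptions.
import random

def hdx_sliding_window(n_packets, window=4, loss=0.2, seed=1):
    rng = random.Random(seed)
    base = 0                      # oldest unacknowledged sequence number
    delivered = []
    rounds = 0
    while base < n_packets:
        rounds += 1
        # Transmit phase: sender bursts up to `window` packets, then turns
        # the half-duplex link around to listen for a cumulative ACK.
        burst = range(base, min(base + window, n_packets))
        received = [seq for seq in burst if rng.random() > loss]
        # Receiver accepts only in-order packets and ACKs cumulatively.
        next_expected = base
        for seq in sorted(received):
            if seq == next_expected:
                delivered.append(seq)
                next_expected += 1
        # ACK phase: the cumulative ACK slides the window forward (the ACK is
        # assumed reliable here to keep the sketch short).
        base = next_expected
    return rounds, delivered

rounds, delivered = hdx_sliding_window(20)
print(f"{len(delivered)} packets delivered in {rounds} half-duplex rounds")
```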
23

VLSI Implementation of Key Components in A Mobile Broadband Receiver

Huang, Yulin January 2009 (has links)
The digital front-end and the Turbo decoder are two key components in a digital wireless communication system. This thesis discusses implementation issues for both. The structure of a digital front-end for a multi-standard radio supporting wireless standards such as IEEE 802.11n, WiMAX and 3GPP LTE is investigated using a top-down design method: an 802.11n digital down-converter is designed from a Matlab model through to a VHDL implementation, and both simulation and FPGA prototyping are carried out. As another significant part of the thesis, a parallel Turbo decoder is designed and implemented for 3GPP LTE. The supported block size ranges from 40 to 6144 and the maximum number of iterations is eight. The Turbo decoder uses eight parallel SISO units to reach a throughput of up to 150 Mbit/s.
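For orientation, a digital down-converter of the kind described combines an NCO mixer, a low-pass FIR filter and decimation. The sketch below illustrates that chain with SciPy; the sample rate, carrier offset, filter length and decimation factor are assumptions for illustration, not the 802.11n design of the thesis.

```python
# A minimal sketch of a digital down-converter: NCO mixing, low-pass FIR
# filtering and decimation. Sample rate, carrier offset, filter length and
# decimation factor are illustrative, not the thesis's 802.11n design.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 80e6            # input sample rate (assumed)
f_if = 20e6          # intermediate frequency to be removed (assumed)
decim = 4            # decimation factor (assumed)

t = np.arange(4096) / fs
rf = np.cos(2 * np.pi * (f_if + 1e6) * t)      # test tone 1 MHz above the IF

# NCO: a complex exponential brings the band of interest to baseband.
nco = np.exp(-2j * np.pi * f_if * t)
baseband = rf * nco

# Low-pass FIR keeps the decimated band and rejects the mixing image.
taps = firwin(numtaps=101, cutoff=fs / (2 * decim), fs=fs)
filtered = lfilter(taps, 1.0, baseband)

# Decimate by keeping every `decim`-th sample.
ddc_out = filtered[::decim]
print(ddc_out.shape, ddc_out.dtype)
```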
24

Bernoulli HMMs for Handwritten Text Recognition

Giménez Pastor, Adrián 09 June 2014 (has links)
In recent years, Hidden Markov Models (HMMs) have received significant attention for the task of off-line handwritten text recognition (HTR). As in automatic speech recognition (ASR), HMMs are used to model the probability of an observation sequence given its corresponding text transcription. However, in contrast to ASR, there is no standard set of local features used by most HTR systems. In this thesis we propose the use of raw binary pixels as features, in conjunction with models that deal more directly with binary data. In particular, we propose the use of Bernoulli HMMs (BHMMs), that is, conventional HMMs in which the Gaussian (mixture) distributions have been replaced by Bernoulli (mixture) probability functions. The objective is twofold: on the one hand, BHMMs allow us to better model the binary nature of text images (foreground/background); on the other hand, they guarantee that no discriminative information is filtered out during feature extraction (most available HTR datasets can be easily binarized without a relevant loss of information). All the HMM theory required to develop an HMM-based HTR toolkit is reviewed and adapted to the case of BHMMs. Specifically, we begin by defining a simple classifier based on BHMMs with Bernoulli probability functions at the states, and we end with an embedded Bernoulli mixture HMM recognizer for continuous HTR. Regarding the binary features, we propose a simple feature extraction process without significant loss of information: all input images are scaled and binarized so that they can be reinterpreted as sequences of binary feature vectors. Two extensions of this basic method are proposed: a sliding window to better capture context, and a repositioning method to better deal with vertical distortions. Competitive results were obtained when BHMMs and the proposed methods were applied to well-known HTR databases. In particular, we ranked first at the Arabic Handwriting Recognition Competition organized during the 12th International Conference on Frontiers in Handwriting Recognition (ICFHR 2010), and at the Arabic Recognition Competition: Multi-font Multi-size Digitally Represented Text organized during the 11th International Conference on Document Analysis and Recognition (ICDAR 2011). In the last part of this thesis we propose a method for training BHMM classifiers using discriminative training criteria, instead of conventional Maximum Likelihood Estimation (MLE). Specifically, we propose a log-linear classifier for binary data based on the BHMM classifier. Parameter estimation of this model can be carried out using discriminative training criteria for log-linear models; in particular, we show the formulae for several MMI-based criteria. Finally, we prove the equivalence between the two classifiers, so that discriminative training of a BHMM classifier can be carried out by obtaining its equivalent log-linear classifier. Reported results show that discriminative BHMMs clearly outperform conventional generative BHMMs. / Giménez Pastor, A. (2014). Bernoulli HMMs for Handwritten Text Recognition [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37978
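The defining change in a BHMM is the state emission model: a Bernoulli mixture over binary pixels instead of a Gaussian mixture. The sketch below shows how such an emission log-likelihood could be evaluated for one binary feature vector; the dimensions, mixture size and clipping constant are illustrative assumptions rather than the toolkit's actual implementation.

```python
# A minimal sketch of a Bernoulli-mixture emission probability, the quantity
# that replaces the Gaussian mixture in a conventional HMM state. Dimensions,
# mixture size and the clipping constant are illustrative assumptions.
import numpy as np

def bernoulli_mixture_loglik(x, priors, protos, eps=1e-6):
    """log p(x | state) for a binary feature vector x under a Bernoulli mixture.

    x       : (D,) binary feature vector (one image column, say)
    priors  : (K,) mixture coefficients, summing to one
    protos  : (K, D) Bernoulli prototypes with entries in (0, 1)
    """
    p = np.clip(protos, eps, 1.0 - eps)
    # log p(x | k) = sum_d [ x_d log p_kd + (1 - x_d) log(1 - p_kd) ]
    log_comp = x @ np.log(p).T + (1.0 - x) @ np.log(1.0 - p).T   # (K,)
    log_joint = np.log(priors) + log_comp
    m = log_joint.max()
    return m + np.log(np.exp(log_joint - m).sum())   # log-sum-exp over components

# Example: a state with K = 2 prototypes over D = 5 binary pixels.
x = np.array([1, 0, 1, 1, 0], dtype=float)
priors = np.array([0.6, 0.4])
protos = np.array([[0.9, 0.1, 0.8, 0.7, 0.2],
                   [0.2, 0.3, 0.4, 0.9, 0.6]])
print(bernoulli_mixture_loglik(x, priors, protos))
```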
25

Interharmonic Analysis of Sustainable Energy Sources and Loads : Comparing two signal processing methods for estimation of interharmonics

Löfgren, Isabelle January 2020 (has links)
In this report, interharmonics from three different measurement sites are studied. The first site is a wind park with three turbines, where the measurements are taken at the point of common coupling of the three. The second site is a network consisting of a PV inverter and two types of EV chargers, a DC charger and an AC charger. Measurements are performed with three different set-ups at this site: only the AC charger connected, only the DC charger connected, and the AC charger and PV inverter connected simultaneously. The third site is a microgrid that uses frequency control to signal how the microgrid should operate at a given moment. The interharmonic analysis was conducted using the desynchronized processing technique (DP) and Sliding-Window Estimation of Signal Parameters via Rotational Invariance Techniques (Sliding-Window ESPRIT, or SWESPRIT). The result from the wind park is that closely and evenly spaced interharmonics can be seen when the current suddenly increases (possibly due to fast variations in wind speed). It is, however, uncertain whether these interharmonics are caused by spectral leakage, since SWESPRIT estimates the fundamental frequency to vary drastically when the wind speed varies. It is observed that the SWESPRIT estimate of the fundamental frequency could be affected by sudden changes in phase angle as the current varies; further investigation and analysis are needed. The result from the site with EV chargers and a PV inverter is that eight distinct patterns can be observed. Some patterns appear to come from the upstream grid, while others appear to be caused by one of the EV chargers, the PV inverter, or interactions between them; further studies are needed. The result from the microgrid measurements is that two distinct patterns at high frequencies (above 1000 Hz) can be observed during grid-connected mode and island mode, respectively. During transitions between grid-connected and island mode, the fundamental frequency varies drastically, making it hard to analyse potential interharmonics and draw inferences; further studies are needed. Advantages and disadvantages of the two applied signal processing methods, as well as ideas for improvements, are discussed throughout the case studies.
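For readers unfamiliar with the second method: ESPRIT estimates the frequencies of a sum of exponentials from the rotational invariance of the signal subspace, and SWESPRIT applies it repeatedly over a sliding window. The sketch below is a basic sliding-window ESPRIT on a synthetic 50 Hz signal with one interharmonic; the window length, hop, model order and subspace dimension are illustrative assumptions, not the configuration used in the report.

```python
# A minimal sketch of ESPRIT frequency estimation applied over a sliding
# window, in the spirit of SWESPRIT. Window length, hop, snapshot dimension
# and model order are illustrative assumptions.
import numpy as np

def esprit(x, n_exp, m, fs):
    """Estimate n_exp complex exponentials in x; a real sinusoid uses two."""
    n = len(x)
    X = np.array([x[i:i + m] for i in range(n - m + 1)]).T   # (m, snapshots)
    R = X @ X.conj().T / X.shape[1]                          # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                       # ascending eigenvalues
    Us = eigvec[:, -n_exp:]                                  # signal subspace
    psi = np.linalg.pinv(Us[:-1]) @ Us[1:]                   # shift invariance
    phases = np.angle(np.linalg.eigvals(psi))
    # A real sinusoid contributes a conjugate pair; keep the positive-frequency members.
    return np.sort(phases[phases > 0]) * fs / (2 * np.pi)

fs = 5000.0
t = np.arange(0, 1.0, 1 / fs)
# 50 Hz fundamental plus a 182 Hz interharmonic.
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 182 * t)

window, hop = 1000, 500          # 200 ms windows, 50 % overlap (assumed)
for start in range(0, len(signal) - window + 1, hop):
    freqs = esprit(signal[start:start + window], n_exp=4, m=40, fs=fs)
    print(f"t = {start / fs:.2f} s  estimated frequencies: {np.round(freqs, 1)}")
```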
26

Detection of Pests in Agriculture Using Machine Learning

Olsson, Emma January 2022 (has links)
Pest inventory of a field is a way of knowing when the threshold for pest control is reached. There is increasing interest in using machine learning to automate this process; however, many challenges arise in detecting small insects, both in traps and on plants. This thesis investigates the prospects of developing an automatic warning system that notifies a user when certain pests are detected in a trap. For this, a sliding window with a support vector machine based on histograms of oriented gradients was implemented. Trap detection with neural network models and a size-check function were tested for narrowing the detections down to pests of a certain size. The results indicate that, with further refinement and more training images, this approach may hold potential for fungus gnats and rape beetles. Further, this thesis also investigates the detection performance of Mask R-CNN and YOLOv5 on different insects in fields, for the purpose of automating the data-gathering process. The models showed promise for detection of rape beetles. YOLOv5 also showed promise as a multi-class detector of different insects, with sizes ranging from small rape beetles to larger bumblebees.
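The trap pipeline can be pictured as follows: HOG features are extracted from fixed-size patches, a linear SVM scores each patch, and a window slides over the trap image to collect high-scoring locations. The sketch below illustrates that idea with scikit-image and scikit-learn; the window size, stride, threshold and the random stand-in training patches are assumptions, not the thesis's trained detector.

```python
# A minimal sketch of sliding-window detection with HOG features and a linear
# SVM. Window size, stride, score threshold and the training data are
# illustrative assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = 32          # square window side in pixels (assumed)
STRIDE = 8        # sliding stride (assumed)

def hog_features(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Train on labelled patches (here: random stand-ins for trap-image crops).
rng = np.random.default_rng(0)
pos = rng.random((20, WIN, WIN))      # pretend insect patches
neg = rng.random((20, WIN, WIN))      # pretend background patches
X = np.array([hog_features(p) for p in np.concatenate([pos, neg])])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LinearSVC(C=1.0).fit(X, y)

def detect(image, threshold=0.5):
    """Slide a WIN x WIN window over the image and keep high-scoring spots."""
    hits = []
    for r in range(0, image.shape[0] - WIN + 1, STRIDE):
        for c in range(0, image.shape[1] - WIN + 1, STRIDE):
            feat = hog_features(image[r:r + WIN, c:c + WIN])
            score = clf.decision_function([feat])[0]
            if score > threshold:
                hits.append((r, c, score))
    return hits

print(len(detect(rng.random((128, 128)))))
```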
27

Comparison of haplotype construction methods in genome-wide association studies with simulated data

Arce, Cherlynn Daniela da Silva 27 February 2018 (has links)
With the advancement of genetic studies and of the technology applied to the genotyping of molecular markers, the identification of polymorphisms associated with characteristics of economic interest has become more accessible, allowing their use to increase the accuracy of models predicting the genetic merit of animals. This advance also made it possible to increase the accuracy of studies identifying QTLs for characteristics of economic interest. However, the markers commonly used for this purpose are SNPs, which, being bi-allelic, may not be very efficient at identifying QTLs. Haplotypes, being multi-allelic, are more likely to be in linkage disequilibrium (LD) with QTLs. Thus, the objective of this work was to identify the best haplotype construction method for use in QTL detection studies, by comparing the three methods most commonly used for this purpose. Three simulated populations representing characteristics with three different heritability values were used, for which the phenotypic, genotypic and pedigree data of the 6,000 animals of the most recent generation were stored: Pop1 with low heritability (0.10); Pop2 with moderate heritability (0.25); and Pop3 with high heritability (0.35). The simulated genomes consisted of 750,000 SNP markers and 750 QTLs, with two to four alleles, arranged randomly on 29 chromosomes with a total length of 2,333 centimorgans (cM). SNPs with a minor allele frequency below 0.1 were eliminated, leaving 576,027, 577,189 and 576,675 markers for the Pop1, Pop2 and Pop3 populations, respectively. The phenotypic variance was 1.0 and the variance explained by the QTLs was 50% of the heritability, for each population. The mean LD for each chromosome, measured by the D' statistic, ranged from 0.20 to 0.30 for all populations in the last generation. Haplotypes were constructed using three methods: Confidence Interval (CI), Four-Gamete Rule (FGR) and Sliding Window (SW). For Pop1, on chromosome 15, the CI, FGR and SW methods identified five, eight and seven QTLs, respectively. Only one QTL was identified on chromosomes 19 and 29. For the high-heritability characteristic, one QTL was identified on chromosome 11. Regarding the association analyses using individual SNPs, four QTLs were identified on chromosome 15. For the moderate-heritability characteristic, no significant haplotypes or isolated SNPs were found. The haplotype construction methodology based on the FGR was considered the most efficient for QTL detection, relative to the CI and SW methods as well as to the use of isolated SNPs.
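The four-gamete rule, which performed best here, closes a haplotype block as soon as a SNP pair within it shows all four possible gametes (evidence of historical recombination). The sketch below is a minimal block builder based on that rule, with a toy 0/1 haplotype matrix; the encoding and data are illustrative assumptions, not the software used in the dissertation.

```python
# A minimal sketch of haplotype-block construction with the four-gamete rule
# (FGR): extend the current block while no SNP pair within it shows all four
# gametes. Input encoding and the toy data are illustrative assumptions.
import numpy as np

def four_gametes(col_a, col_b):
    """True if the two biallelic SNP columns exhibit all four gamete types."""
    return len(set(zip(col_a, col_b))) == 4

def fgr_blocks(haplotypes):
    """haplotypes: (n_haplotypes, n_snps) array of 0/1 alleles."""
    n_snps = haplotypes.shape[1]
    blocks, start = [], 0
    for j in range(1, n_snps):
        # Recombination between SNP j and any SNP of the current block
        # closes the block just before j.
        if any(four_gametes(haplotypes[:, i], haplotypes[:, j])
               for i in range(start, j)):
            blocks.append((start, j - 1))
            start = j
    blocks.append((start, n_snps - 1))
    return blocks

# Toy example: 6 haplotypes x 6 SNPs; recombination shows up between SNP 2 and 3.
H = np.array([[0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 0],
              [1, 1, 1, 0, 1, 1],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [1, 1, 1, 1, 0, 0]])
print(fgr_blocks(H))   # [(0, 2), (3, 5)]
```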
28

A proposed implementation of a time-varying harmonics analyzer

Fabri, Diego Fagundes 03 March 2011 (has links)
This work proposes an implementation of a real-time, time-varying harmonic analyzer on a TMS320F28027 DSP platform. The algorithm used to perform the harmonic decomposition is the Sliding-Window Recursive DFT. The algorithm is fully described, and studies are made concerning the decomposition of signals containing interharmonics or variations in the fundamental frequency. In addition, new parameters and an auxiliary structure are defined to assist in detecting spectral leakage in the decompositions. A method is also proposed for correcting the amplitude and phase errors caused by the analog input filters, through manipulation of the DFT quadrature components. The Sliding-Window DFT algorithm is then implemented on the DSP platform, and harmonic decompositions of real signals are performed with the proposed prototype analyzer. This time-domain view of the harmonics allows various phenomena in today's power systems to be observed and studied from a new point of view. Besides the study of the DFT and the proposed prototype, a new QMF FIR filter-bank structure for harmonic decomposition is developed.
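The Sliding-Window Recursive DFT updates each frequency bin from the sample entering and the sample leaving the window, rather than recomputing the whole transform. A minimal sketch of that recursion is shown below; the window length, sampling rate and test signal are illustrative assumptions, not the prototype's settings.

```python
# A minimal sketch of a sliding-window recursive DFT: when the window slides
# by one sample, each bin is updated from the sample entering and the sample
# leaving, instead of being recomputed from scratch. Window length and the
# test signal are illustrative assumptions.
import numpy as np

def sliding_recursive_dft(x, N, bins):
    """Yield the selected DFT bins of each length-N window of x, one sample apart."""
    twiddle = np.exp(2j * np.pi * np.asarray(bins) / N)
    X = np.fft.fft(x[:N])[bins].astype(complex)       # initial window, direct DFT
    yield X.copy()
    for n in range(N, len(x)):
        # Recursive update: X_k <- (X_k + x[n] - x[n - N]) * e^{j 2 pi k / N}
        X = (X + x[n] - x[n - N]) * twiddle
        yield X.copy()

fs, N = 3840, 64                    # 64 samples per 60 Hz fundamental cycle (assumed)
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)  # 1st + 5th harmonic

for i, X in enumerate(sliding_recursive_dft(signal, N, bins=[1, 5])):
    if i % 64 == 0:
        mags = 2 * np.abs(X) / N    # amplitudes of the fundamental and 5th harmonic
        print(f"window {i}: |h1| = {mags[0]:.3f}, |h5| = {mags[1]:.3f}")
```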
29

Function and data approximation in the L1 norm by polynomial splines

Gajny, Laurent 15 May 2015 (has links)
Data and function approximation is fundamental in application domains such as path planning and signal processing (sensor data). In such domains, it is important to obtain curves that preserve the shape of the data. Considering the results obtained for the problem of data interpolation, L1 splines appear to be a good solution: contrary to classical L2 splines, they preserve linear portions of the data and do not introduce extraneous oscillations when applied to data sets with abrupt changes. This dissertation studies the problem of best L1 approximation. The study includes theoretical developments on the best L1 approximation of functions with a jump discontinuity in general spaces called Chebyshev and weak-Chebyshev spaces; polynomial splines fit into this framework. Algorithms for approximating discrete data in the L1 norm by smoothing splines and spline fits, based on a sliding-window process, are introduced. The methods previously proposed in the literature can be relatively time-consuming when applied to large data sets, whereas the sliding-window algorithms achieve linear complexity in the number of data points and can be parallelized. Finally, a new approximation approach with prescribed error, called interpolation to within a tolerance delta, is introduced, together with a purely algebraic algorithm of linear complexity that can be used in real-time applications.
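The discrete best-L1 fit inside each window can be posed as a linear program: minimize the sum of absolute residuals subject to bounds that linearize the absolute values. The sketch below illustrates this for a sliding-window polynomial fit with SciPy; the window length, overlap, degree and test data are illustrative assumptions, and it is a plain polynomial fit rather than the L1 spline algorithms of the thesis.

```python
# A minimal sketch of an L1 (least absolute deviations) polynomial fit solved
# as a linear program, applied over sliding windows of a data set. Window
# length, overlap, polynomial degree and the test data are illustrative.
import numpy as np
from scipy.optimize import linprog

def l1_polyfit(t, y, degree):
    """Minimize sum_i |p(t_i) - y_i| over polynomial coefficients via an LP."""
    A = np.vander(t, degree + 1)               # Vandermonde design matrix
    n, k = A.shape
    # Variables: k coefficients, then n residual bounds e_i >= |residual_i|.
    cost = np.concatenate([np.zeros(k), np.ones(n)])
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k + [(0, None)] * n
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:k]

# Piecewise-linear data with a jump and one outlier; the L1 criterion is less
# sensitive to the outlier than a least-squares fit would be.
t = np.linspace(0, 1, 60)
y = np.where(t < 0.5, t, t + 1.0)
y[10] += 5.0                                   # an outlier

window, hop = 20, 10                           # sliding-window fitting (assumed)
for start in range(0, len(t) - window + 1, hop):
    sl = slice(start, start + window)
    coeffs = l1_polyfit(t[sl], y[sl], degree=1)
    print(f"window starting at t={t[start]:.2f}: slope={coeffs[0]:.2f}, "
          f"intercept={coeffs[1]:.2f}")
```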
30

Transducer-based Algorithmic Verification of Retransmission Protocols over Noisy Channels

Thakkar, Jay January 2013 (has links) (PDF)
Unreliable communication channels are a practical reality. They add to the complexity of protocol design and verification. In this work, we consider noisy channels which can corrupt messages. We present an approach to model and verify protocols which combine error detection and error control to provide reliable communication over noisy channels. We call these protocols retransmission protocols as they achieve reliable communication through repeated retransmissions of messages. These protocols typically use cyclic redundancy checks and sliding window protocols for error detection and control respectively. We propose models of these protocols as regular transducers operating on bit strings. Deterministic streaming string transducers provide a natural way of modeling these protocols and formalizing correctness requirements. The verification problem is posed as functional equivalence between the protocol transducer and the specification transducer. Functional equivalence checking is decidable for this class of transducers and this makes the transducer models amenable to algorithmic verification. In our transducer models, message lengths and retransmission rounds are unbounded. We present case studies based on TinyOS serial communication and the HDLC retransmission protocol. We further extend our protocol models to capture the effects of a noisy channel with non-determinism. We present two non-deterministic yet decidable extensions of transducer models of retransmission protocols. For one of our models, we achieve decidable verification by bounding the retransmission rounds, whereas for the other, even retransmission rounds are unbounded.
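To make the "verification as functional equivalence" idea concrete, the sketch below checks equivalence of two complete deterministic Mealy machines by breadth-first search over their product, comparing outputs symbol by symbol. This is a toy model: the thesis works with deterministic streaming string transducers, whose equivalence check is considerably richer; the parity example and the machine encoding are illustrative assumptions.

```python
# A toy sketch of verification by transducer equivalence: two complete
# deterministic Mealy machines over the same input alphabet are functionally
# equivalent iff every reachable state pair of their product agrees on the
# output for every input symbol. The machine model and the running-parity
# example are illustrative assumptions, far simpler than streaming string
# transducers.
from collections import deque

def mealy_equivalent(m1, m2, alphabet):
    """m1, m2: (initial_state, delta) with delta[state][symbol] = (next_state, output)."""
    (s1, d1), (s2, d2) = m1, m2
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        for a in alphabet:
            next_p, out_p = d1[p][a]
            next_q, out_q = d2[q][a]
            if out_p != out_q:
                return False                       # outputs diverge on input a
            if (next_p, next_q) not in seen:
                seen.add((next_p, next_q))
                queue.append((next_p, next_q))
    return True

# Two ways of emitting the running parity of the bits read so far.
even_parity = ("e", {"e": {"0": ("e", "0"), "1": ("o", "1")},
                     "o": {"0": ("o", "1"), "1": ("e", "0")}})
xor_version = ("0", {"0": {"0": ("0", "0"), "1": ("1", "1")},
                     "1": {"0": ("1", "1"), "1": ("0", "0")}})
print(mealy_equivalent(even_parity, xor_version, alphabet="01"))   # True
```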
