Allen, Matthew S
20 August 2014
"This thesis presents technologies that integrate field programmable gate arrays (FPGAs), model-driven design tools, and software-defined radios (SDRs). Specifically, an assessment of current state-of-the-art practices applying model-driven development techniques targeting SDR systems is conducted. FPGAs have become increasingly versatile computing devices due to their size and resource enhancements, advanced core generation, partial reconfigurability, and system-on-a-chip (SoC) implementations. Although FPGAs possess relatively better performance per watt when compared to central processing units (CPUs) or graphics processing units (GPUs), FPGAs have been avoided due to long development cycles and higher implementation costs due to significant learning curves and low levels of abstraction associated with the hardware description languages (HDLs). This thesis conducts a performance assessment of SDR designs using both a model-driven design approach developed with Mathworks HDL Coder and a hand-optimized design approach created from the model-driven VHDL. Each design was implemented on the FPGA fabric of a Zynq-7000 SoC, using a Zedboard evaluation platform for hardware verification. Furthermore, a set of guidelines and best practices for applying model-driven design techniques toward the development of SDR systems using HDL Coder is presented."
This thesis delineates the European standard ETSI EN 300 744 for terrestrial digital video broadcasting (DVB-T) and describes an OFDM coder and decoder created for baseband signal transmission in 2K mode, without error-correction capabilities. The correct function of both devices is verified by means of Matlab simulations and by practical implementation on Texas Instruments' TMS320C6711 digital signal processor using Starter Kits.
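For orientation, a minimal sketch of the 2K-mode OFDM round trip follows. It is deliberately simplified relative to ETSI EN 300 744 (no pilot or TPS carriers, an arbitrary carrier mapping, a 1/4 guard interval assumed), but it shows the IFFT and cyclic-prefix structure such a coder and decoder implement.

```python
import numpy as np

N_FFT = 2048          # DVB-T 2K mode FFT size
N_CARRIERS = 1705     # active carriers in 2K mode (ETSI EN 300 744)
GUARD = N_FFT // 4    # guard interval of 1/4 (one of the standard options)

def ofdm_modulate(symbols: np.ndarray) -> np.ndarray:
    """Map N_CARRIERS complex symbols onto one OFDM symbol with cyclic prefix."""
    assert symbols.size == N_CARRIERS
    spectrum = np.zeros(N_FFT, dtype=complex)
    # Center the active carriers; unused bins stay zero (simplified mapping).
    start = (N_FFT - N_CARRIERS) // 2
    spectrum[start:start + N_CARRIERS] = symbols
    time = np.fft.ifft(np.fft.ifftshift(spectrum))
    return np.concatenate([time[-GUARD:], time])   # prepend cyclic prefix

def ofdm_demodulate(rx: np.ndarray) -> np.ndarray:
    """Strip the cyclic prefix and recover the carrier symbols."""
    spectrum = np.fft.fftshift(np.fft.fft(rx[GUARD:GUARD + N_FFT]))
    start = (N_FFT - N_CARRIERS) // 2
    return spectrum[start:start + N_CARRIERS]

qpsk = (2 * np.random.randint(0, 2, (N_CARRIERS, 2)) - 1) @ [1, 1j] / np.sqrt(2)
assert np.allclose(ofdm_demodulate(ofdm_modulate(qpsk)), qpsk)
```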
Zuchowski Filho, Edmundo
October 2010
This work presents a viability analysis of using a synthetic VoIP control flow to infer the performance of individual flows within a flow aggregate belonging to an EF PHB in a DiffServ network. The proposed approach aims to establish, through a simple performance check, whether the SLA is met for some of the VoIP flow requirements. The results can feed requirements specifications for tool design, for example to support capacity-planning activities and network-management actions. The VoIP traffic was classified as homogeneous (all packets of a flow created by the same codec type) or heterogeneous (packets originated by more than one codec type) for the experiments. The experiments tested the hypothesis that the control flow's performance can be related to the performance of the individual flows of a flow aggregate under the stated assumptions and metrics. The metrics one-way delay, jitter, and packet loss were estimated by simulation for both homogeneous and heterogeneous traffic under several controlled load conditions. The results support the viability of the approach for estimating one-way delay and, with confidence limitations that depend on the traffic type (heterogeneous) and codec type, also jitter.
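As a hedged illustration of the metrics involved, the sketch below estimates one-way delay and the RFC 3550 interarrival jitter from per-packet timestamps; the flow data are invented, and synchronized sender and receiver clocks are assumed.

```python
def one_way_delays(send_ts, recv_ts):
    """Per-packet one-way delay; assumes synchronized clocks (an assumption)."""
    return [r - s for s, r in zip(send_ts, recv_ts)]

def rfc3550_jitter(send_ts, recv_ts):
    """Smoothed interarrival jitter, J += (|D| - J)/16, per RFC 3550."""
    j = 0.0
    prev = None
    for s, r in zip(send_ts, recv_ts):
        transit = r - s
        if prev is not None:
            j += (abs(transit - prev) - j) / 16.0
        prev = transit
    return j

# Invented example: a 20 ms-spaced VoIP flow with variable network delay (s).
send = [i * 0.020 for i in range(6)]
recv = [s + d for s, d in zip(send, [0.050, 0.052, 0.049, 0.061, 0.055, 0.050])]
print("delays:", one_way_delays(send, recv))
print("jitter:", rfc3550_jitter(send, recv))
```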
Fine Granularity Video Compression Technique and Its Application to Robust Video Transmission over Wireless Internet
Su, Yih-ching
22 December 2003
This dissertation deals with (a) a fine granularity video compression technique and (b) its application to robust video transmission over the wireless Internet. First, two wavelet-domain motion estimation algorithms, HMRME (Half-pixel Multi-Resolution Motion Estimation) and HSDD (Hierarchical Sum of Double Difference Metric), are proposed to give a wavelet-based FGS (Fine Granularity Scalability) video encoder either low-complexity or high-performance characteristics. Second, a VLSI-friendly high-performance embedded coder, ABEC (Array-Based Embedded Coder), is built to encode the motion compensation residue as a bitstream with fine granularity scalability. Third, an analysis of loss-rate prediction over a Gilbert channel with loss-rate feedback, together with several optimal FEC (Forward Error Correction) assignment schemes applicable to any real-time FGS video transmission system, is presented. In addition to these theoretical contributions, and as groundwork for future study of embedded systems for wireless FGS video transmission, an initial FPGA-based MPEG-4 video encoder has also been implemented in this work.
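The Gilbert channel mentioned above is straightforward to simulate. The sketch below is a generic two-state loss model, not the dissertation's analysis, and the transition probabilities are invented for illustration.

```python
import random

def gilbert_losses(n, p_gb, p_bg, loss_in_bad=1.0, seed=0):
    """Simulate packet losses over a two-state Gilbert channel.

    p_gb: P(good -> bad); p_bg: P(bad -> good). Packets are lost with
    probability loss_in_bad while the channel is in the bad state.
    """
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n):
        bad = rng.random() < (1 - p_bg if bad else p_gb)
        losses.append(bad and rng.random() < loss_in_bad)
    return losses

losses = gilbert_losses(10_000, p_gb=0.02, p_bg=0.4)
print("loss rate:", sum(losses) / len(losses))
# Stationary bad-state probability: p_gb / (p_gb + p_bg) = 0.02/0.42 ~ 0.048
```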
The initial part of my work gives an overall introduction to transmission systems and their categories. The work then focuses on classifying the protection codes used in transmission systems. The next chapter analyzes how errors originate, their mathematical description, and the categories of errors that can arise during transmission. The following chapter describes convolutional codes for protection against errors, covers the principle of converting a serial sequence to a parallel one and vice versa, and outlines issues concerning the input of convolutional coders. Three convolutional codes used for protection against burst errors are then presented. The next chapter addresses message interleaving: the methods used, how they work, and a detailed description of how errors are eliminated with the help of long-segment interleaving, followed by a chapter on techniques used to suppress clustered errors. The last chapter is dedicated to the practical part of my thesis: detailed descriptions and simulations of how the unprotected section is protected, transmission over the line, the injection of burst errors into the transmitted protected data, and the subsequent correction, or splitting, of clustered errors in the receiver. The simulation used three convolutional codes for burst-error correction and two interleaving techniques.
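A minimal sketch of the interleaving idea the thesis builds on follows: writing symbols into a matrix row-wise and reading them column-wise spreads one long burst into isolated errors that the code can correct. The matrix dimensions and burst length are illustrative, not taken from the thesis.

```python
def interleave(data, rows, cols):
    """Write row-wise, read column-wise (block interleaver)."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse permutation: write column-wise, read row-wise."""
    out = [None] * (rows * cols)
    order = ((c, r) for c in range(cols) for r in range(rows))
    for i, (c, r) in enumerate(order):
        out[r * cols + c] = data[i]
    return out

msg = list(range(24))
tx = interleave(msg, rows=4, cols=6)
tx[3:7] = ["X"] * 4                      # a burst of 4 consecutive errors
rx = deinterleave(tx, rows=4, cols=6)
print([i for i, v in enumerate(rx) if v == "X"])  # burst spread to [1, 7, 13, 18]
```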
Epps, Julien, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW
Most existing telephone networks transmit narrowband coded speech which has been bandlimited to 4 kHz. Compared with normal speech, this speech has a muffled quality and reduced intelligibility, which is particularly noticeable in sounds such as /s/, /f/ and /sh/. Speech bandlimited to 8 kHz is often coded for this reason, but this requires an increase in the bit rate. Wideband enhancement is a scheme that adds a synthesized highband signal to narrowband speech to produce a higher-quality wideband speech signal. The synthesized highband signal is based entirely on information contained in the narrowband speech, and is thus obtained at zero increase in the bit rate from a coding perspective. Wideband enhancement can function as a post-processor to any narrowband telephone receiver, or alternatively it can be combined with any narrowband speech coder to produce a very low bit rate wideband speech coder. Applications include higher-quality mobile telephony, teleconferencing, and internet telephony. This thesis examines in detail each component of the wideband enhancement scheme: highband excitation synthesis, highband envelope estimation, and narrowband-highband envelope continuity. Objective and subjective test measures are formulated to assess existing and new methods for all components, and the likely limits to the performance of wideband enhancement are also investigated. A new method for highband excitation synthesis is proposed that uses a combination of sinusoidal transform coding-based excitation and random excitation. Several new techniques for highband spectral envelope estimation are also developed; their performance is shown to approach the likely achievable limit. Subjective tests demonstrate that wideband speech synthesized using these techniques has higher quality than the input narrowband speech. Finally, a new paradigm for very low bit rate wideband speech coding is presented, in which the quality of the wideband enhancement scheme is improved further by allocating a very small bitstream to highband envelope and gain coding. Thus, this thesis demonstrates that wideband speech can be communicated at or near the bit rate of a narrowband speech coder.
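For context, the sketch below shows the classic spectral-folding baseline for highband excitation synthesis, not the thesis's sinusoidal-plus-random method: zero-insertion upsampling by two mirrors the 0-4 kHz spectrum into the 4-8 kHz band, and a highpass keeps only that image. The filter order and cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def spectral_fold(narrowband: np.ndarray, fs_nb: int = 8000) -> np.ndarray:
    """Crude highband excitation via spectral folding.

    Zero-insertion upsampling by 2 creates a mirror image of the 0-4 kHz
    spectrum in the 4-8 kHz band; a highpass keeps only that image.
    """
    up = np.zeros(2 * narrowband.size)
    up[::2] = narrowband                      # zero-stuff: images the spectrum
    sos = butter(8, 4000, btype="highpass", fs=2 * fs_nb, output="sos")
    return sosfilt(sos, up)                   # keep the folded 4-8 kHz image

fs = 8000
t = np.arange(fs) / fs
nb = np.sin(2 * np.pi * 1000 * t)             # 1 kHz narrowband tone
hb = spectral_fold(nb, fs)                    # dominant image lands near 7 kHz
print("highband RMS:", np.sqrt(np.mean(hb**2)))
```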
Igboayaka, Jane-Vivian Chinelo Ezinne
This research examines the confluence of consumers' use of social media to share information with the ever-present need for innovative research that yields insight into consumers' economic decisions. Social media networks have become ubiquitous in the new millennium. These networks, including, among others, Facebook, Twitter, blogs, and Reddit, are brimming with conversations on an expansive array of topics between people, private and public organizations, governments, and global institutions. Preliminary findings from initial research confirm the existence of online conversations and posts related to matters of personal finance and consumers' economic outlook. Meanwhile, the Consumer Confidence Index (CCI) continues to make headline news. The issue of consumer confidence (or sentiment) in anticipating future economic activity generates significant interest from major players in the news media industry, who scrutinize its every detail and report its implications for key players in the economy. Though the CCI originated in the United States in 1946, variants of the survey are now used to track and measure consumer confidence in nations worldwide. Because the CCI is a quantified representation of consumer sentiments, the level of confidence consumers have in the economy could plausibly be deduced by tracking the sentiments or opinions they express in social media posts. Systematic study of these posts could then be transformed into insights that improve the accuracy of an index like the CCI. Herein lies the focus of the current research: to analyze the attributes of data from social media posts in order to assess their capacity to generate insights that are novel and/or complementary to traditional CCI methods. The link between data gained from social media and the survey-based CCI is perhaps not an obvious one, but our research uses a data extraction tool called NetBase Insight Workbench to mine data from social media networks and then applies natural language processing to analyze the social media content. KH Coder software is also used to perform a set of statistical analyses on samples of social media posts, examining the co-occurrence and clustering of words. The findings are used to expose the strengths and weaknesses of the data and to assess the validity and cohesion of the NetBase data extraction tool and its suitability for future research. In conclusion, our research findings support the analysis of opinions expressed in social media posts as a complement to traditional survey-based CCI approaches. Our findings also identify a key weakness with regard to the degree of 'noisiness' of the data. Although this could be attributed to the 'modeling' error of the data mining tool, there is room for improvement in the area of association, that is, of discerning the context and intention of posts in online conversations.
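As a small illustration of the kind of word co-occurrence analysis attributed to KH Coder above, the sketch below counts word pairs that appear in the same post; the tokenization scheme and the example posts are invented.

```python
from collections import Counter
from itertools import combinations

posts = [
    "worried about my savings and rising prices",
    "prices keep rising but my salary is flat",
    "feeling confident about the economy this year",
]

def cooccurrences(docs):
    """Count unordered word pairs that co-occur within the same post."""
    pairs = Counter()
    for doc in docs:
        words = sorted(set(doc.lower().split()))   # one count per pair per post
        pairs.update(combinations(words, 2))
    return pairs

for (a, b), n in cooccurrences(posts).most_common(3):
    print(f"{a!r} + {b!r}: {n}")
```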
03 August 2009
This work concerns a specific family of error detection and correction codes, the Reed-Solomon codes. Such codes are used in telecommunications applications (wired telephony, digital television, broadband wireless communications) and in digital storage systems (optical and magnetic disks). Reed-Solomon codes are based on a specific category of numerical fields called Galois fields. The work comprises a study of the properties of Galois fields and the design of a codec for Reed-Solomon codes. The design was implemented in hardware in the Verilog HDL language, and the circuit was synthesized for both Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) technologies. The design methodology for Intellectual Property cores (IP cores) for integrated circuits was followed, according to which the design is platform-independent and can be implemented with minimal or no changes in different technologies. The IP-core model is widely applied in Systems-on-Chip (SoC).
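A minimal software model of the arithmetic such a codec implements in hardware: multiplication in GF(2^8) under the field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), commonly used for RS(255, k) codes. The polynomial choice is an assumption; the work does not state which one it used.

```python
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two elements of GF(2^8), reducing modulo the field polynomial.

    Shift-and-add method; in hardware the same reduction becomes a small
    XOR network rather than a loop.
    """
    result = 0
    while b:
        if b & 1:
            result ^= a            # addition in GF(2^8) is XOR
        a <<= 1
        if a & 0x100:
            a ^= poly              # reduce when the degree reaches 8
        b >>= 1
    return result

# alpha = 0x02 is a generator for this polynomial; alpha^255 == 1.
x = 1
for _ in range(255):
    x = gf256_mul(x, 0x02)
assert x == 1
```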
Compression, analysis and visualization of physiological (EEG) signals applied to telemedicine
Dhif, Imen
13 December 2017
Due to the large amount of EEG data acquired over several days, an efficient compression technique is necessary. The shortage of experts and the short duration of epileptic seizures call for automatic seizure detection, and a uniform viewer is required to ensure interoperability and a correct reading of transmitted EEG exams. The medically certified image coder WAAVES provides high compression ratios (CR) while preserving diagnostic image quality. This thesis addresses three challenges: adapting WAAVES to the compression of EEG signals, automatically detecting epileptic seizures in an EEG signal, and ensuring the interoperability of EEG exam viewers. A study of WAAVES shows that the coder can neither remove spatial correlation nor compress one-dimensional signals directly. We therefore applied ICA to decorrelate the signals, scaling to resize decimal values, and image construction; to keep diagnostic quality with a PDR below 7%, we also coded the residue. The proposed compression algorithm, EEGWaaves, achieved CRs of about 56. We then proposed a new EEG feature-extraction method based on a new calculation model, the energy expected measurement (EAM), of EEG signals; statistical parameters were computed and neural networks applied to detect epileptic seizures, reaching a sensitivity of up to 100% and an accuracy of 99.44%. The last chapter details the deployment of our multi-platform viewer of physiological signals, which ensures the interoperability of EEG exams between healthcare centers.
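Assuming the PDR criterion above corresponds to the percent root-mean-square difference commonly used to judge physiological-signal compression (an assumption; the abstract does not expand the acronym), it can be computed as in the sketch below; the EEG data here are simulated.

```python
import numpy as np

def prd_percent(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent root-mean-square difference between original and decoded signal.

    Assumed to correspond to the PDR quality criterion in the abstract;
    lower is better, and the thesis targets values below 7%.
    """
    err = np.sum((original - reconstructed) ** 2)
    ref = np.sum(original ** 2)
    return 100.0 * np.sqrt(err / ref)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(1024)                     # stand-in for one channel
decoded = eeg + 0.05 * rng.standard_normal(1024)    # simulated coding error
print(f"PRD = {prd_percent(eeg, decoded):.2f}%")
```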
Protection of data transmission against long error bursts
Malach, Roman
January 2008
This Master's thesis discusses the protection of data transmission against long error bursts. The data are transmitted through a channel with a defined error rate: an error-free interval of 2000 bits and a burst-error length of 250 bits. One aim of this work is to assemble a set of candidate methods for realizing an error-correction system. A basic selection is made from the best-known codes, which are divided into several categories; only the best code from each category passes to the next round of selection, and interleaving is of course used as well. At the end the codes are compared, and the best three are simulated in Matlab to check correct function. Of these three options, one is chosen as optimal for practical realization. Two implementations are possible, hardware or software; the software one proved more useful, and the codec was written in the C language. Given C and current computer architectures, an 8-bit element size is convenient, which makes RS(255, 191), a code that works with 8-bit symbols, the optimal choice. The codec for this code, comprising its coder and decoder, was then created, and a final program simulates the error channel. The results are presented through several examples.
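A hedged back-of-the-envelope check on that choice: RS(255, 191) has 64 parity symbols and therefore corrects t = 32 symbol errors per codeword, while a 250-bit burst lands on 32 or 33 consecutive 8-bit symbols depending on alignment, which is why interleaving still matters. The sketch below only restates these code and channel parameters; the interleaver depths are illustrative.

```python
import math

n, k, bits_per_symbol = 255, 191, 8
t = (n - k) // 2                       # RS corrects t symbol errors: t = 32
burst_bits = 250
ef_interval_bits = 2000                # channel: error-free gap between bursts

aligned = math.ceil(burst_bits / bits_per_symbol)          # 32 symbols
worst = (burst_bits - 2) // bits_per_symbol + 2            # 33 if misaligned
print(f"t = {t}, burst covers {aligned}-{worst} symbols")

# With a depth-d symbol interleaver over d codewords, a single burst is
# split so each codeword sees at most ceil(worst / d) symbol errors.
for d in (1, 2, 4):
    per_codeword = math.ceil(worst / d)
    print(f"depth {d}: {per_codeword} errors/codeword "
          f"({'OK' if per_codeword <= t else 'exceeds t'})")
```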