1

Performance Assessment of Model-Driven FPGA-based Software-Defined Radio Development

Allen, Matthew S 20 August 2014 (has links)
"This thesis presents technologies that integrate field programmable gate arrays (FPGAs), model-driven design tools, and software-defined radios (SDRs). Specifically, an assessment of current state-of-the-art practices applying model-driven development techniques targeting SDR systems is conducted. FPGAs have become increasingly versatile computing devices due to their size and resource enhancements, advanced core generation, partial reconfigurability, and system-on-a-chip (SoC) implementations. Although FPGAs possess relatively better performance per watt when compared to central processing units (CPUs) or graphics processing units (GPUs), FPGAs have been avoided due to long development cycles and higher implementation costs due to significant learning curves and low levels of abstraction associated with the hardware description languages (HDLs). This thesis conducts a performance assessment of SDR designs using both a model-driven design approach developed with Mathworks HDL Coder and a hand-optimized design approach created from the model-driven VHDL. Each design was implemented on the FPGA fabric of a Zynq-7000 SoC, using a Zedboard evaluation platform for hardware verification. Furthermore, a set of guidelines and best practices for applying model-driven design techniques toward the development of SDR systems using HDL Coder is presented."
2

Uma abordagem para análise de desempenho de fluxos VoIP em redes de serviços diferenciados / An approach to the performance analysis of VoIP flows in differentiated services networks

Zuchowski Filho, Edmundo October 2010 (has links)
This work presents a viability analysis of the use of a synthetic VoIP control flow to infer the performance of the individual flows of an aggregate belonging to an EF PHB in a DiffServ network. The proposed approach aims to establish, through a simple performance check, whether the SLA requirements associated with the VoIP flows are being met. The results can feed requirements and specifications for tool design, for example to support capacity-planning activities and network-management actions. The VoIP traffic was classified as homogeneous (all packets of a flow created by the same codec type) or heterogeneous (packets originating from more than one codec type) for the experiments. The experiments tested the hypothesis that the performance of the control flow can be related to the performance of the individual flows of the aggregate under the stated assumptions and metrics. The metrics of one-way delay, jitter, and packet loss were estimated by simulation for both homogeneous and heterogeneous traffic under several controlled load conditions. The results support the viability of the approach for estimating one-way delay and, with limited confidence, jitter, depending on the traffic type (heterogeneous) and the codec type.
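As a minimal sketch of the three metrics named in the abstract, the following Python snippet estimates one-way delay, interarrival jitter, and packet loss for a probe flow from per-packet send and receive timestamps. The jitter estimator follows the standard RFC 3550 recursion; the sample timestamps and function name are illustrative assumptions, not the thesis's simulation setup.

```python
# Hedged sketch: one-way delay, RFC 3550 interarrival jitter, and packet loss computed
# from per-packet send/receive timestamps of a probe (control) flow.
# Sample data and names are illustrative only.

def flow_metrics(sent, received):
    """sent: {seq: t_send}, received: {seq: t_recv}; times in seconds."""
    delays = [received[s] - sent[s] for s in sent if s in received]
    loss = 1.0 - len(delays) / len(sent)

    jitter = 0.0
    prev_transit = None
    for s in sorted(sent):
        if s not in received:
            continue
        transit = received[s] - sent[s]
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0        # RFC 3550 smoothing
        prev_transit = transit
    return sum(delays) / len(delays), jitter, loss

sent = {i: i * 0.02 for i in range(5)}                      # 20 ms packetization (assumed)
received = {0: 0.055, 1: 0.078, 3: 0.121, 4: 0.139}         # packet 2 lost
print(flow_metrics(sent, received))   # (mean one-way delay, jitter, loss fraction)
```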
3

Realizace OFDM kodéru pro potřeby DVB-T / Realization of OFDM coder for DVB-T system

Zelinka, Petr January 2008 (has links)
This thesis describes the European standard ETSI EN 300 744 for terrestrial digital video broadcasting (DVB-T) and the OFDM coder and decoder created for baseband signal transmission in 2K mode without error-correction capabilities. The proper function of both devices is verified by means of MATLAB simulations, and the design is implemented on Texas Instruments' TMS320C6711 digital signal processor using its Starter Kit.
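As a hedged sketch of the central OFDM step in the 2K mode mentioned above, the Python snippet below maps 1705 data cells onto a 2048-point IFFT and prepends a cyclic prefix. Pilot carriers, TPS signalling, exact carrier placement, and the guard-interval choice are simplified assumptions rather than the full EN 300 744 coder.

```python
# Minimal sketch of OFDM modulation in DVB-T 2K mode: map data to 1705 active carriers,
# run a 2048-point IFFT, prepend a cyclic prefix. Pilots, TPS signalling, and the exact
# carrier ordering of EN 300 744 are omitted (simplifying assumptions).
import numpy as np

N_FFT, N_ACTIVE, GUARD = 2048, 1705, 1 / 4      # 2K mode; 1/4 guard interval assumed

def ofdm_symbol(cells):
    """cells: 1705 complex constellation points -> one time-domain OFDM symbol."""
    assert len(cells) == N_ACTIVE
    spectrum = np.zeros(N_FFT, dtype=complex)
    start = (N_FFT - N_ACTIVE) // 2              # centre the active carriers (simplified)
    spectrum[start:start + N_ACTIVE] = cells
    time = np.fft.ifft(np.fft.ifftshift(spectrum)) * np.sqrt(N_FFT)
    cp = time[-int(N_FFT * GUARD):]              # cyclic prefix
    return np.concatenate([cp, time])

bits = np.random.randint(0, 2, (N_ACTIVE, 2))
cells = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)   # QPSK mapping
print(ofdm_symbol(cells).shape)    # (2560,) = 2048 + 512 cyclic-prefix samples
```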
4

Fine Granularity Video Compression Technique and Its Application to Robust Video Transmission over Wireless Internet

Su, Yih-ching 22 December 2003 (has links)
This dissertation deals with (a) a fine granularity video compression technique and (b) its application to robust video transmission over the wireless Internet. First, two wavelet-domain motion estimation algorithms, HMRME (Half-pixel Multi-Resolution Motion Estimation) and HSDD (Hierarchical Sum of Double Difference Metric), are proposed to give a wavelet-based FGS (Fine Granularity Scalability) video encoder either low-complexity or high-performance features. Second, a VLSI-friendly high-performance embedded coder, ABEC (Array-Based Embedded Coder), is built to encode the motion compensation residue as a bitstream with fine granularity scalability. Third, an analysis of loss-rate prediction over a Gilbert channel with loss-rate feedback and several optimal FEC (Forward Error Correction) assignment schemes applicable to any real-time FGS video transmission system are presented. In addition to these theoretical contributions, and as groundwork for future study of embedded systems for wireless FGS video transmission, an initial FPGA-based MPEG-4 video encoder has also been implemented.
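The loss-rate analysis mentioned above is built on the two-state Gilbert channel; as a hedged illustration, the Python sketch below simulates such a channel, with a low loss probability in the Good state and a high one in the Bad state. The transition and loss probabilities are illustrative assumptions, not values from the dissertation.

```python
# Hedged sketch of the two-state Gilbert loss model: packets are rarely lost in the
# Good state and frequently lost in the Bad state, producing the bursty losses that
# FEC assignment must cover. All probabilities below are illustrative assumptions.
import random

P_GB, P_BG = 0.05, 0.30        # Good->Bad and Bad->Good transition probabilities
LOSS_G, LOSS_B = 0.01, 0.50    # per-packet loss probability in each state

def gilbert_losses(n_packets, seed=1):
    random.seed(seed)
    state, losses = "G", []
    for _ in range(n_packets):
        losses.append(random.random() < (LOSS_G if state == "G" else LOSS_B))
        if state == "G" and random.random() < P_GB:
            state = "B"
        elif state == "B" and random.random() < P_BG:
            state = "G"
    return losses

trace = gilbert_losses(10_000)
print("overall loss rate:", sum(trace) / len(trace))
```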
5

Metody prokládání zprávy / Methods of interleaving data

Soudek, Michal January 2008 (has links)
The initial part of the work gives an overall introduction to transmission systems and their categories. The work then focuses on the classification of the error-protection codes used in transmission systems. The next chapter analyses how errors originate, how they are described mathematically, and how the errors that can arise during transmission are categorised. The following chapter describes convolutional codes and the protection they provide against errors, including the conversion of a serial sequence to a parallel one and vice versa, and outlines the issues at the input of a convolutional coder. The next part presents three convolutional codes used for protection against burst errors. The next chapter addresses message interleaving: the methods used, how they arise, and a detailed description of how errors are eliminated with the help of long-segment interleaving. The chapter after that describes techniques used to suppress clustered (burst) errors. The last chapter is dedicated to the practical part of the thesis: a detailed description and simulation of how the unprotected section is protected, transmission over the line, the injection of burst errors into the transmitted protected data, and the subsequent correction or dispersal of clustered errors in the receiver. Three convolutional codes for correcting clustered errors and two interleaving techniques were used in the simulation.
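As a hedged illustration of why interleaving helps against burst errors, the Python sketch below implements a simple block interleaver (write by rows, read by columns) and shows a burst of consecutive channel errors being dispersed after de-interleaving. The 4x6 dimensions and the burst position are illustrative assumptions, not the interleavers used in the thesis.

```python
# Hedged sketch of the interleaving idea: a block interleaver spreads a burst of
# channel errors across the de-interleaved stream, so the decoder sees isolated errors.
# Dimensions and burst position are illustrative assumptions.

ROWS, COLS = 4, 6

def interleave(symbols):
    assert len(symbols) == ROWS * COLS
    # write row by row, read column by column
    return [symbols[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(symbols):
    # inverse permutation: write column by column, read row by row
    return [symbols[c * ROWS + r] for r in range(ROWS) for c in range(COLS)]

data = list(range(24))
sent = interleave(data)
corrupted = sent[:8] + ["X"] * 5 + sent[13:]             # burst of 5 consecutive errors
restored = deinterleave(corrupted)
print([i for i, v in enumerate(restored) if v == "X"])   # error positions now spread out
```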
6

Zielsystemunabhängiger Modellbasierter Entwurf auf der Basis von MATLAB/Simulink® mit einem vollständigen Model in the Loop-Test und automatischer Code-Generierung für einen Mischprozess / Target-system-independent model-based design based on MATLAB/Simulink® with a complete model-in-the-loop test and automatic code generation for a mixing process

Büchau, Bernd, Gröbe, Gerald 27 January 2022 (has links)
On the basis of a model-based design with MATLAB/Simulink®, the design for the mixing process of a mixing station is carried out. Through the use of the MathWorks® PLC Coder™ toolbox, this design can be used without any changes on currently 12 automation systems from different manufacturers, so it can be regarded as a target-system-independent design. The advantage is that the system is not designed in the classical way according to IEC 61131 Part 3 with a manufacturer- or system-specific development environment; instead, a manufacturer- and system-independent design can be carried out with the tools available in Simulink. With the PLC Coder™ toolbox for MATLAB/Simulink®, structured text according to IEC 61131 Part 3 is generated automatically for the respective automation system. In the present case, a SIMATIC ET 200SP Open Controller with appropriate extension modules (digital and analogue inputs and outputs, etc.) and the TIA Portal version 15.1 from Siemens are used as the automation system. / For the mixing process, a mixing station, liquids from three different tanks are dosed in various predefined mixing ratios and quantities and fed into a mixing tank. The model-based design is carried out primarily by means of finite state machines with a subordinate flow-rate control, in a total of nine subsystems. To achieve the highest possible accuracy of the mixed product, the dosing of the different liquids is volume-controlled. For the complete model-based design of the automation of the mixing station, its visualisation, and the modelling of the HMI by means of apps built with the MATLAB/Simulink® App Designer, a complete model-in-the-loop (MIL) test is carried out in real time to verify the overall system; this test is the focus here. Because the model-based design is carried out at a significantly higher level of abstraction and is followed by the MIL test, the implementation and commissioning phases become minimal. Finally, the main advantages of model-based design are discussed and an outlook is given.
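The design described above is built in Simulink and emitted as IEC 61131-3 structured text by PLC Coder; purely as a language-shifted illustration of the underlying idea of a finite state machine with volume-controlled dosing, here is a hedged Python sketch. The state sequence, target volumes, and flow-reading interface are assumptions for illustration and do not reproduce the nine-subsystem design.

```python
# Rough, language-shifted illustration: a finite state machine that doses liquid from
# one tank by volume before moving to the next step. The real design is done in
# Simulink and generated as IEC 61131-3 structured text; state names, target volumes,
# and the flow-rate reading below are assumptions for illustration.

RECIPE = {"tank_A": 3.0, "tank_B": 1.5, "tank_C": 0.5}   # target volumes in litres (assumed)

def run_batch(read_flow_rate, dt=0.1):
    """read_flow_rate(tank) -> litres/second; returns dosed volumes per tank."""
    dosed = {}
    for tank, target in RECIPE.items():          # states: DOSE_A -> DOSE_B -> DOSE_C -> DONE
        volume = 0.0
        while volume < target:                   # volume-controlled dosing
            volume += read_flow_rate(tank) * dt  # integrate the measured flow
        dosed[tank] = volume
    return dosed

# usage sketch with a constant simulated flow of 0.25 l/s
print(run_batch(lambda tank: 0.25))
```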
7

Evaluation of high-level synthesis tools for generation of Verilog code from MATLAB based environments

Bäck, Carl January 2020 (has links)
FPGAs are of interest in the signal processing domain as they provide the opportunity to run algorithms at very high speed. One possible use case is to sort incoming data in a measurement system, using e.g. a histogram method. Developing code for FPGA applications usually requires knowledge of special languages, which is not common in the signal processing domain. High-level synthesis is an approach where high-level languages, such as MATLAB or C++, can be used together with a code generation tool to directly generate an FPGA-ready output. This thesis uses the development of a histogram as a test case to investigate the efficiency of three different tools, HDL Coder in MATLAB, HDL Coder in Simulink, and System Generator for DSP, in comparison with direct development of the same histogram in Vivado using Verilog. How to write and structure code in these tools for proper functionality was also examined. It was found that all tools deliver an operating frequency comparable to a direct implementation in Verilog and decreased resource usage, with development time reduced by 27% (HDL Coder in MATLAB), 45% (System Generator), and 64% (HDL Coder in Simulink), but at the cost of increased power consumption. Instructions for how to use all three tools have been collected and summarised.
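As a hedged sketch of the histogram test case, the Python snippet below writes the binning in the streaming, fixed-bin style that maps naturally onto an FPGA (one read-modify-write of a bin memory per sample). The bin count, input range, and names are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch of a streaming histogram of the kind used as the HLS test case above.
# Bin count, input range, and names are assumptions for illustration.
import random

N_BINS = 64
V_MIN, V_MAX = 0, 1024            # assumed range of the incoming measurement samples

def stream_histogram(samples):
    bins = [0] * N_BINS                         # models the on-chip bin RAM
    scale = N_BINS / (V_MAX - V_MIN)
    for x in samples:                           # one sample per clock cycle
        idx = min(max(int((x - V_MIN) * scale), 0), N_BINS - 1)   # clamp out-of-range
        bins[idx] += 1                          # read-modify-write of one bin
    return bins

random.seed(0)
data = [random.randint(0, 1023) for _ in range(10_000)]
hist = stream_histogram(data)
print(sum(hist), max(hist))                     # all samples binned, roughly uniform
```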
8

Wideband extension of narrowband speech for enhancement and coding

Epps, Julien, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2000 (has links)
Most existing telephone networks transmit narrowband coded speech which has been bandlimited to 4 kHz. Compared with normal speech, this speech has a muffled quality and reduced intelligibility, which is particularly noticeable in sounds such as /s/, /f/ and /sh/. Speech which has been bandlimited to 8 kHz is often coded for this reason, but this requires an increase in the bit rate. Wideband enhancement is a scheme that adds a synthesized highband signal to narrowband speech to produce a higher quality wideband speech signal. The synthesized highband signal is based entirely on information contained in the narrowband speech, and is thus achieved at zero increase in the bit rate from a coding perspective. Wideband enhancement can function as a post-processor to any narrowband telephone receiver, or alternatively it can be combined with any narrowband speech coder to produce a very low bit rate wideband speech coder. Applications include higher quality mobile, teleconferencing, and internet telephony. This thesis examines in detail each component of the wideband enhancement scheme: highband excitation synthesis, highband envelope estimation, and narrowband-highband envelope continuity. Objective and subjective test measures are formulated to assess existing and new methods for all components, and the likely limitations to the performance of wideband enhancement are also investigated. A new method for highband excitation synthesis is proposed that uses a combination of sinusoidal transform coding-based excitation and random excitation. Several new techniques for highband spectral envelope estimation are also developed. The performance of these techniques is shown to be approaching the limit likely to be achieved. Subjective tests demonstrate that wideband speech synthesized using these techniques has higher quality than the input narrowband speech. Finally, a new paradigm for very low bit rate wideband speech coding is presented in which the quality of the wideband enhancement scheme is improved further by allocating a very small bitstream for highband envelope and gain coding. Thus, this thesis demonstrates that wideband speech can be communicated at or near the bit rate of a narrowband speech coder.
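As a toy illustration of the wideband-enhancement idea (synthesizing a 4-8 kHz highband from narrowband speech and adding it back), the Python sketch below creates a crude highband by spectral folding after upsampling and scales it by a fixed gain. The thesis's actual excitation synthesis and envelope estimation are far more sophisticated; the gain value, signal, and function name are assumptions for illustration.

```python
# Toy sketch of wideband enhancement: build a crude 4-8 kHz highband from 0-4 kHz
# narrowband speech by spectral folding, scale it, and add it back. The thesis uses
# sinusoidal/random excitation and estimated envelopes instead; values here are assumed.
import numpy as np

def enhance(narrowband_8k, highband_gain=0.1):
    """narrowband_8k: speech sampled at 8 kHz -> crude 16 kHz 'wideband' signal."""
    up = np.zeros(2 * len(narrowband_8k))
    up[::2] = narrowband_8k                    # zero-insertion upsampling by 2
    spec = np.fft.rfft(up)
    n = len(spec)
    lowband = spec.copy()
    lowband[n // 2:] = 0                       # keep the original 0-4 kHz content
    highband = spec.copy()
    highband[:n // 2] = 0                      # the folded spectral image, 4-8 kHz
    return np.fft.irfft(lowband + highband_gain * highband, n=len(up))

tone = np.sin(2 * np.pi * 1000 * np.arange(800) / 8000)   # 1 kHz tone at 8 kHz
wideband = enhance(tone)
print(wideband.shape)                                      # (1600,) samples at 16 kHz
```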
9

Σχεδίαση κωδικοποιητή-αποκωδικοποιητή Reed-Solomon / Design of a Reed-Solomon encoder-decoder

Ρούδας, Θεόδωρος 03 August 2009 (has links)
This work concerns a specific group of error detection and correction codes, the Reed-Solomon codes. Such codes are used in telecommunications applications (wireline telephony, digital television, broadband wireless communications) and digital storage systems (optical and magnetic disks). Reed-Solomon codes are based on a specific category of numerical fields called Galois fields. The work consists of a study of the properties of Galois fields and the design of a codec for Reed-Solomon codes. The design was implemented in hardware in the Verilog HDL language. Synthesis of the circuit targets Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) technologies. The design follows the Intellectual Property core (IP core) methodology for integrated circuits, according to which the design is platform independent and the implementation can therefore be achieved with minimal or no changes across different technologies. The IP core model is widely applied in Systems on Chip (SoC).
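As a hedged sketch of the arithmetic these codes rest on, the Python snippet below implements multiplication in GF(2^8) and a small systematic Reed-Solomon encoder by polynomial division. The primitive polynomial 0x11D and the choice of four parity symbols are common textbook parameters assumed for illustration; they are not taken from the thesis, which implements its codec in Verilog.

```python
# Hedged sketch of Reed-Solomon arithmetic: GF(2^8) multiplication and a systematic
# RS encoder that appends nsym parity symbols (here 4, correcting up to 2 symbol errors).
# The primitive polynomial 0x11D and the parameters are assumed textbook values.

PRIM = 0x11D   # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a, b):
    """Carry-less multiplication of two GF(2^8) elements, reduced modulo PRIM."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return result

def gf_poly_mul(p, q):
    """Multiply two polynomials with GF(2^8) coefficients (highest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator(nsym):
    """Generator polynomial g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1))."""
    g, alpha = [1], 1
    for _ in range(nsym):
        g = gf_poly_mul(g, [1, alpha])
        alpha = gf_mul(alpha, 2)       # next power of the primitive element
    return g

def rs_encode(msg, nsym=4):
    """Systematic encoding: append the remainder of msg(x)*x^nsym divided by g(x)."""
    gen = rs_generator(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[-nsym:]

codeword = rs_encode([0x40, 0xD6, 0x47, 0x12])
print([hex(b) for b in codeword])      # message followed by 4 parity symbols
```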
10

Using Social Media Networks for Measuring Consumer Confidence: Problems, Issues and Prospects

Igboayaka, Jane-Vivian Chinelo Ezinne January 2015 (has links)
This research examines the confluence of consumers' use of social media to share information with the ever-present need for innovative research that yields insight into consumers' economic decisions. Social media networks have become ubiquitous in the new millennium. These networks, including, among others, Facebook, Twitter, blogs, and Reddit, are brimming with conversations on an expansive array of topics between people, private and public organizations, governments, and global institutions. Preliminary findings from initial research confirm the existence of online conversations and posts related to matters of personal finance and consumers' economic outlook. Meanwhile, the Consumer Confidence Index (CCI) continues to make headline news. The issue of consumer confidence (or sentiment) in anticipating future economic activity generates significant interest from major players in the news media industry, who scrutinize its every detail and report its implications for key players in the economy. Though the CCI originated in the United States in 1946, variants of the survey are now used to track and measure consumer confidence in nations worldwide. Because the CCI is a quantified representation of consumer sentiments, the level of confidence consumers have in the economy could plausibly be deduced by tracking the sentiments or opinions they express in social media posts. Systematic study of these posts could then be transformed into insights that improve the accuracy of an index like the CCI. Herein lies the focus of the current research: to analyze the attributes of data from social media posts in order to assess their capacity to generate insights that are novel and/or complementary to traditional CCI methods. The link between data gained from social media and the survey-based CCI is perhaps not an obvious one, but our research uses a data extraction tool called NetBase Insight Workbench to mine data from the social media networks and then applies natural language processing to analyze the social media content. KH Coder software is also used to perform a set of statistical analyses on samples of social media posts to examine the co-occurrence and clustering of words. The findings are used to expose the strengths and weaknesses of the data and to assess the validity and cohesion of the NetBase data extraction tool and its suitability for future research. In conclusion, our research findings support the analysis of opinions expressed in social media posts as a complement to traditional survey-based CCI approaches. Our findings also identified a key weakness with regard to the degree of 'noisiness' of the data. Although this could be attributed to the 'modeling' error of the data mining tool, there is room for improvement in the area of association: discerning the context and intention of posts in online conversations.
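As a rough sketch of the word co-occurrence analysis mentioned above, the Python snippet below counts how often pairs of words appear together in the same post, which is the raw material for co-occurrence networks and clustering (done here far more simply than in KH Coder). The sample posts, tokenization, and stop-word list are illustrative assumptions.

```python
# Rough sketch of word co-occurrence counting: for each post, count every unordered
# pair of distinct words that appear together. Sample posts, the naive tokenization,
# and the stop-word list are illustrative assumptions, not the study's pipeline.
from collections import Counter
from itertools import combinations

STOP = {"the", "is", "a", "to", "and", "my", "i", "of", "on", "about"}

def cooccurrences(posts):
    counts = Counter()
    for post in posts:
        words = sorted({w.strip(".,!?") for w in post.lower().split()} - STOP)
        counts.update(combinations(words, 2))      # every unordered word pair per post
    return counts

posts = [
    "worried about my job and the economy",
    "the economy is improving, time to spend",
    "prices keep rising, cutting back on spending",
]
for pair, n in cooccurrences(posts).most_common(5):
    print(pair, n)
```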
