131 |
Human Activity Recognition Based on Transfer Learning. Pang, Jinyong. 06 July 2018.
Human activity recognition (HAR) based on time-series data is the problem of classifying various patterns of activity. Its wide range of applications in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products customized to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results. However, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently is a challenging task. In this study, we develop a solid HAR system using a Convolutional Neural Network based on transfer learning, which eliminates these barriers.
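A minimal sketch of the transfer-learning setup described above, in PyTorch: a 1D-CNN backbone is pretrained on a large HAR corpus, its feature extractor is frozen, and only a small task-specific head is trained, which is what removes most of the training cost. The backbone architecture, checkpoint name and layer sizes are illustrative assumptions, not the thesis's published design.

```python
import torch
import torch.nn as nn

# Hypothetical 1D-CNN backbone pretrained on a large HAR corpus; the thesis
# does not publish its exact architecture, so this is an illustrative stand-in.
class ConvBackbone(nn.Module):
    def __init__(self, in_channels=3):  # e.g. 3-axis accelerometer input
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):               # x: (batch, channels, time)
        return self.features(x).squeeze(-1)

backbone = ConvBackbone()
# backbone.load_state_dict(torch.load("pretrained_har.pt"))  # hypothetical checkpoint

# Transfer learning: freeze the pretrained feature extractor ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and train only a small task-specific classification head.
model = nn.Sequential(backbone, nn.Linear(128, 6))  # e.g. 6 activity classes
optimizer = torch.optim.Adam(model[1].parameters(), lr=1e-3)
```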
|
132 |
Parallel-Node Low-Density Parity-Check Convolutional Code Encoder and Decoder Architectures. Brandon, Tyler. 06 1900.
We present novel architectures for parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoders and decoders. Based on a recently introduced implementation-aware class of LDPC-CCs, these encoders and decoders take advantage of increased node-parallelization to simultaneously decrease the energy-per-bit and increase the decoded information throughput. A series of progressively improved encoder and decoder designs are presented and characterized using synthesis results with respect to power, area and throughput. The best of the encoder and decoder designs significantly advance the
state-of-the-art in terms of both the energy-per-bit and throughput/area metrics. One of the presented decoders, for an Eb/N0 of 2.5 dB, has a bit-error-rate of 10⁻⁶, takes 4.5 mm² in a 90-nm CMOS process, and achieves an energy-per-decoded-information-bit of 65 pJ and a decoded information throughput of 4.8 Gbit/s. We implement an earlier non-parallel-node LDPC-CC encoder, decoder and a channel emulator in silicon. We provide readers, via two sets of tables, the ability to look up our decoder hardware metrics, across four different process technologies, for over 1000 variations of our PN-LDPC-CC decoders. By imposing practical decoder implementation constraints on power or area, which in turn drive trade-offs in code size versus the number of decoder processors, we compare code BER performance. An extensive comparison to known LDPC-BC/CC decoder implementations is provided.
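As a quick consistency check on the figures above, energy per decoded information bit is decoder power divided by decoded throughput, so the quoted 65 pJ/bit at 4.8 Gbit/s implies roughly 0.31 W:

```python
# Energy-per-bit and throughput are related by E_bit = P / R; check that
# 65 pJ/bit at 4.8 Gbit/s implies a decoder power of about 0.31 W.
energy_per_bit = 65e-12      # J/bit
throughput     = 4.8e9       # bit/s
power = energy_per_bit * throughput
print(f"implied decoder power: {power:.3f} W")  # ~0.312 W
```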
|
133 |
Performance Analysis of Dispersed Spectrum Cognitive Radio Systems. Mohammad, Muneer. December 2009.
Dispersed spectrum cognitive radio systems represent a promising approach to exploiting spectral resources to the fullest extent. Therefore, the performance analysis of such systems is conducted in this research. The average symbol error probability of dispersed spectrum cognitive radio systems is derived for two cases, where each channel realization experiences independent or dependent Nakagami-m fading, respectively. In addition, the derivation is extended to include the effects of modulation type and order by considering M-PSK and M-QAM modulation schemes. We then study the impact of topology on the effective transport capacity of ad hoc dispersed spectrum cognitive radio systems where the nodes assume 3-dimensional (3D) configurations. We derive the effective transport capacity considering a cubic grid distribution. In addition, numerical results are presented to demonstrate the effects of topology on the effective transport capacity of ad hoc dispersed spectrum cognitive radio systems.
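For reference, the standard MGF-based single-integral form gives the average M-PSK symbol error probability over a single Nakagami-m branch; the dispersed-spectrum analysis in the thesis builds on results of this kind (under independent fading the per-band terms multiply inside the integral). A sketch with illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad

def avg_ser_mpsk_nakagami(snr_db, M=4, m=2.0):
    """Average M-PSK symbol error probability over Nakagami-m fading,
    via the standard MGF-based single-integral form:
    Ps = (1/pi) * integral_0^{pi(M-1)/M} [1 + g*SNR/(m sin^2 t)]^(-m) dt,
    with g = sin^2(pi/M)."""
    gbar = 10 ** (snr_db / 10.0)        # average SNR per symbol
    g = np.sin(np.pi / M) ** 2
    integrand = lambda t: (1 + gbar * g / (m * np.sin(t) ** 2)) ** (-m)
    val, _ = quad(integrand, 1e-9, np.pi * (M - 1) / M)
    return val / np.pi

# Illustrative call: QPSK at 10 dB average SNR, Nakagami parameter m = 2.
print(avg_ser_mpsk_nakagami(10.0, M=4, m=2.0))
```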
|
134 |
A Viterbi Decoder Using SystemC for Area-Efficient VLSI Implementation. Sozen, Serkan. 01 September 2006.
In this thesis, the VLSI implementation of a Viterbi decoder using SystemC, a design and simulation platform, is studied. For this purpose, the architecture of the Viterbi decoder is optimized for VLSI implementation. Consequently, two novel area-efficient structures for reconfigurable Viterbi decoders are suggested.
The traditional and SystemC design cycles are compared to show the advantages of SystemC; the C++ platforms supporting SystemC are listed, and installation issues and examples are discussed.
The Viterbi decoder is widely used to estimate the message encoded by a convolutional encoder. In implementations reported in the literature, special structures called trellises are formed to decrease complexity and area.
In this thesis, two new area-efficient reconfigurable Viterbi decoder approaches are suggested, based on rearranging the states of the trellis structures to eliminate switching and memory-addressing complexity.
The first suggested architecture reduces switching and memory-addressing complexity. The states are reorganized, and the trellis structures are realized by reusing the same structures in subsequent instances. As a result, area is minimized and power consumption is reduced; since addressing complexity is reduced, speed is expected to increase.
The second area-efficient Viterbi decoder is an improved version of the first and adds the ability to configure the constraint length, code rate, transition probabilities, trace-back depth and generator polynomials.
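For illustration, a minimal hard-decision Viterbi decoder in software for the classic rate-1/2, constraint-length-3 code with generators (7, 5) octal; this is a sketch of the algorithm itself, not the reconfigurable hardware architectures proposed in the thesis:

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, K=3 code with
# generator polynomials (7, 5) octal -- an illustrative software model.
G = [0b111, 0b101]            # generator polynomials
K = 3                         # constraint length
N_STATES = 1 << (K - 1)

def encode_step(state, bit):
    """One encoder step: returns (next_state, output bit pair)."""
    reg = (bit << (K - 1)) | state
    out = [bin(reg & g).count("1") & 1 for g in G]   # parity taps
    return reg >> 1, out

def viterbi_decode(received):
    """received: list of (bit, bit) hard-decision pairs."""
    INF = float("inf")
    metrics = [0.0] + [INF] * (N_STATES - 1)         # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metrics[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = encode_step(s, bit)
                bm = sum(o != x for o, x in zip(out, r))  # Hamming branch metric
                if metrics[s] + bm < new_metrics[ns]:     # add-compare-select
                    new_metrics[ns] = metrics[s] + bm
                    new_paths[ns] = paths[s] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[min(range(N_STATES), key=lambda s: metrics[s])]

# Encoding [1, 0, 1, 1] yields [(1,1), (1,0), (0,0), (0,1)]; decoding recovers it.
print(viterbi_decode([(1, 1), (1, 0), (0, 0), (0, 1)]))
```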
|
135 |
Trellis Coded Multi-h CPFSK via Matched Codes. Hsieh, Jeng-Shien. 19 July 2000.
Continuous phase frequency shift keying (CPFSK) is a modulation method with memory. The memory results from the continuity of the transmitted carrier phase from one signaling interval to the next. For a specific form of the phase, CPFSK becomes a special case of the general class of continuous phase modulation (CPM) signals. In this thesis, we extend the decomposition model of single-h CPM to a multi-h CPM decomposition model. Based on this decomposition model, multi-h CPFSK schemes are evaluated by searching for the desired multi-h phase codes at a given number of states.
Moreover, trellis coded multi-h CPFSK schemes, which combine (binary) convolutional codes with the multi-h CPFSK schemes, are found by an optimization procedure via the matched encoding method. To further improve the performance in terms of coding gain, ring convolutional codes are applied to the continuous phase encoder (CPE) of the proposed multi-h CPFSK schemes. Because the code structure of ring convolutional codes is similar to that of the CPE, this results in a simple and efficient combination of the convolutional codes with the multi-h CPFSK signaling schemes.
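As a small illustration of the phase behaviour the CPE tracks: for binary multi-h CPFSK the accumulated phase at symbol boundaries is θn = π Σi<n h(i mod H) ai (mod 2π), and with rational indices the number of phase states is finite, which is what makes a trellis description possible. A sketch, with an illustrative index set rather than one of the codes found in the thesis:

```python
import numpy as np

# Phase-state evolution of a binary multi-h CPFSK scheme, following the
# decomposition into a continuous phase encoder (CPE) plus memoryless mapper.
# The index set {2/8, 3/8} is illustrative, not a code from the thesis.
H = [2/8, 3/8]                      # modulation indices, cycled per interval

def phase_states(symbols):
    """Accumulated phase (mod 2*pi) at each symbol boundary:
    theta_n = pi * sum_{i<n} h_{i mod H} * a_i, with a_i in {-1, +1}."""
    theta, out = 0.0, []
    for i, a in enumerate(symbols):
        theta = (theta + np.pi * H[i % len(H)] * a) % (2 * np.pi)
        out.append(theta)
    return out

print(phase_states([+1, -1, +1, +1]))
```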
|
136 |
Towards Higher Speed Decoding of Convolutional Turbo Codes. Sanchez Gonzalez, Oscar David. 15 March 2013.
Turbo codes are a well-known channel coding technique, widely used because of their outstanding error-correction performance close to the Shannon limit. These codes were proposed using a clever pragmatic approach in which a set of previously introduced concepts, together with iterative processing of data, were successfully combined to obtain close-to-optimal decoding performance. However, precisely because of this iterative processing, high latency appears and the achievable decoder throughput is limited. At the beginning of our research activities, the fastest turbo decoder architecture in the literature achieved a peak throughput of around 700 Mbit/s, and several other works proposed architectures achieving around 100 Mbit/s. Research opportunities were therefore available to establish architectural solutions that enable decoding at a few Gbit/s, so that industrial requirements are fulfilled and future high-performance digital communication systems can be conceived.
The first part of this work is devoted to the study of turbo codes at the algorithmic level. Several SISO decoder algorithms are explored, and different parallel turbo decoding techniques are analyzed. The convergence of parallel turbo decoders is especially considered; to this end, EXtrinsic Information Transfer (EXIT) charts are used. Conclusions derived from these diagrams have served to propose a novel SISO decoder schedule for shuffled turbo decoder architectures.
The architectural issues in implementing highly parallel turbo decoders are considered in the second part of this thesis. We propose a high-throughput, low-complexity radix-16 SISO decoder, intended to break the bottleneck caused by the recursive operations at the heart of the turbo decoding algorithm. The design of this architecture was made possible by the elimination of parallel paths in a radix-16 trellis transition. The proposed SISO decoder implements a high-speed radix-8 Add-Compare-Select (ACS) unit, which exhibits lower hardware complexity and a shorter critical path than a radix-16 ACS unit. Our radix-16 SISO decoder degrades the turbo decoder's error-correcting performance, so we have proposed two techniques that allow the architecture to be used in practical applications. Architectural solutions to build highly parallel turbo decoder architectures integrating our SISO decoder are then presented. Finally, a methodology to efficiently explore the design space of parallel turbo decoder architectures is described; the main objective of this approach is to reduce time to market by designing turbo decoder architectures for a given target throughput.
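The recursion the radix-16 design attacks is the add-compare-select (ACS) kernel: a radix-2^k unit retimes k consecutive trellis steps into one clock cycle. A minimal radix-2 software model of one max-log-MAP forward-recursion step, with an illustrative trellis description (the real decoder does this in hardware):

```python
# Radix-2 add-compare-select (ACS): the recursive kernel that limits turbo
# decoder speed.  A radix-2^k unit (such as the radix-16 design above)
# collapses k of these steps into one cycle.  Metrics here are illustrative.
def acs_radix2(state_metrics, branch_metrics, trellis):
    """One trellis step of the max-log-MAP forward recursion.
    trellis[s] lists (prev_state, branch_index) pairs reaching state s."""
    new_metrics = []
    for s, predecessors in enumerate(trellis):
        candidates = [state_metrics[p] + branch_metrics[b]   # add
                      for p, b in predecessors]
        new_metrics.append(max(candidates))                  # compare-select
    return new_metrics

# Example: a 4-state trellis where each state has two predecessors.
trellis = [[(0, 0), (1, 1)], [(2, 2), (3, 3)], [(0, 1), (1, 0)], [(2, 3), (3, 2)]]
print(acs_radix2([0.0, -1.0, -0.5, -2.0], [0.3, -0.2, 0.1, 0.0], trellis))
```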
|
138 |
Space-time Coded Modulation Design in Slow Fading. Elkhazin, Akrum. 08 March 2010.
This dissertation examines multi-antenna transceiver design over flat-fading wireless channels. Bit Interleaved Coded Modulation (BICM) and MultiLevel Coded Modulation (MLCM) transmitter structures are considered, as well as the use of an optional spatial precoder under slow and quasi-static fading conditions. At the receiver, MultiStage Decoder (MSD) and Iterative Detection and Decoding (IDD) strategies are applied. Precoder, mapper and subcode designs are optimized for different receiver structures over the different antenna and fading scenarios.
Under slow and quasi-static channel conditions, fade-resistant multi-antenna transmission is achieved through a combination of linear spatial precoding and non-linear multi-dimensional mapping. A time-varying random unitary precoder is proposed, with significant performance gains over spatial interleaving. The fade-resistant properties of multidimensional random mapping are also analyzed. For MLCM architectures, a group random labelling strategy is proposed for large antenna systems.
The use of complexity-constrained receivers in BICM and MLCM transmissions is explored. Two multi-antenna detectors are proposed based on a group detection strategy, whose complexity can be adjusted through the group-size parameter. These detectors show performance gains over the Minimum Mean Squared Error (MMSE) detector in spatially multiplexed systems having an excess number of transmitter antennas (a sketch of the MMSE baseline follows this abstract).
A class of irregular convolutional codes is proposed for use in BICM transmissions. An irregular convolutional code is formed by encoding fractions of bits with different puncture patterns and mother codes of different memory. The code profile is designed with the aid of extrinsic information transfer charts, based on the channel and mapping-function characteristics. In multi-antenna applications, these codes outperform convolutional turbo codes under independent and quasi-static fading conditions.
For finite-length transmissions, MLCM-MSD performance is affected by the mapping function. Labelling schemes such as set partitioning and multidimensional random labelling generate a large spread of subcode rates. A class of generalized Low-Density Parity-Check (LDPC) codes is proposed to improve low-rate subcode performance. For MLCM-MSD transmissions, the proposed generalized LDPC codes outperform conventional LDPC code constructions over a wide range of channels and design rates.
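For context, the MMSE baseline mentioned above is the linear detector x̂ = (H^H H + σ²I)⁻¹ H^H y; a sketch with illustrative dimensions and noise level:

```python
import numpy as np

# Linear MMSE detection for a spatially multiplexed MIMO system -- the
# baseline the group detectors above are compared against.  Dimensions and
# noise level are illustrative.
def mmse_detect(H, y, noise_var):
    """x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    nt = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt), H.conj().T)
    return W @ y

H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x = np.sign(np.random.randn(4)) + 0j          # BPSK symbols
y = H @ x + 0.1 * (np.random.randn(4) + 1j * np.random.randn(4))
print(np.sign(mmse_detect(H, y, 0.02).real))  # hard decisions
```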
|
139 |
VLSI Processors for Error Correction with Convolutional Codes (Επεξεργαστές VLSI για διόρθωση λαθών με συνελικτικούς κώδικες). Kazilis, Fanis (Καζίλης, Φάνης). 21 March 2012.
The purpose of this diploma thesis is the study and design of VLSI processors for error correction. The class of VLSI processors on which this work focuses is the Viterbi decoder.
First, the structure of a digital telecommunication system is presented, along with some basic concepts of error-correcting codes. Convolutional encoders are then analyzed, among them the convolutional encoder used in this work, which is widely used in the WiFi 802.11a standard. Next, the AWGN channel and BPSK modulation are described, followed by the basic concepts of the Viterbi algorithm: how it works, its structure and its applications.
Subsequently, various VLSI architectures of the Viterbi decoder are studied. In terms of the implementation of arithmetic operations, the architectures developed are Radix-2 and Radix-4 Viterbi, while in terms of decoding mode, architectures are developed for continuous (streaming) decoding and for decoding packets of 20 bits. The performance of these architectures is evaluated using the bit error rate (BER) as the criterion, and their implementation on a Xilinx development system is analyzed.
Finally, conclusions are drawn from the results of the different simulations performed.
|
140 |
Improving Photogrammetry Using Semantic Segmentation. Kernell, Björn. January 2018.
3D reconstruction is the process of constructing a three-dimensional model from images. It consists of multiple steps, each of which can introduce errors. When reconstructing outdoor scenes, some types of scene content regularly cause problems and degrade the resulting 3D model. Two of these are water, because it fluctuates from image to image and may contain reflections that look different from different viewpoints, and sky, because it contains no useful 3D information. These areas cause problems throughout the process and generally do not benefit it in any way. Therefore, masking them early in the reconstruction chain can be a useful step in an outdoor scene reconstruction pipeline. Manual masking of images is a time-consuming, tedious and expensive task, especially for the large data sets often used in large-scale 3D reconstructions. This master thesis explores whether this masking can be done automatically using Convolutional Neural Networks for semantic segmentation, and to what degree the masking would benefit a 3D reconstruction pipeline.
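A sketch of the masking step the thesis investigates: a semantic segmentation network produces a per-pixel label map, and pixels labelled as sky or water are excluded before reconstruction. The model here is a placeholder and the class indices are assumptions:

```python
import numpy as np

# Masking sky/water before 3D reconstruction, as proposed above.  The
# segmentation model is a stand-in: any per-pixel classifier whose label map
# includes "sky" and "water" classes would do; class ids here are assumptions.
SKY, WATER = 2, 9            # hypothetical class indices

def mask_image(image, label_map):
    """Zero out pixels whose predicted class should not produce 3D data."""
    keep = ~np.isin(label_map, [SKY, WATER])
    return image * keep[..., np.newaxis], keep

image = np.random.rand(480, 640, 3)               # placeholder photograph
label_map = np.random.randint(0, 12, (480, 640))  # placeholder CNN output
masked, keep = mask_image(image, label_map)
print(f"kept {keep.mean():.0%} of pixels for reconstruction")
```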
|