41

A Temporal Encoder-Decoder Approach to Extracting Blood Volume Pulse Signal Morphology from Face Videos

Li, Fulan 05 July 2023 (has links)
This thesis considers methods for extracting blood volume pulse (BVP) representations from video of the human face. Whereas most previous systems have been concerned with estimating vital signs such as average heart rate, this thesis addresses the more difficult problem of recovering BVP signal morphology. We present a new approach inspired by temporal encoder-decoder architectures that have been used for audio signal separation. As input, the system accepts a temporal sequence of RGB (red, green, blue) values that have been spatially averaged over a small portion of the face. The output of the system is a temporal sequence that approximates a BVP signal. To reduce noise in the recovered signal, a separate processing step extracts individual pulses and performs normalization and outlier removal. These steps yield individual pulse shapes that are sufficiently distinct to support biometric authentication. Our findings demonstrate the effectiveness of our approach in extracting BVP signal morphology from facial videos, which presents exciting opportunities for further research in this area. The source code is available at https://github.com/Adleof/CVPM-2023-Temporal-Encoder-Decoder-iPPG / Master of Science / This thesis considers methods for extracting blood volume pulse (BVP) representations from video of the human face. We present a new approach inspired by a method that has been used for audio signal separation. The output of our system is an approximation of the BVP signal of the person in the video. Our method can extract a signal that is sufficiently distinct to support biometric authentication. Our findings demonstrate the effectiveness of our approach in extracting BVP signal morphology from facial videos, which presents exciting opportunities for further research in this area.
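The pulse-level post-processing described above (peak detection, per-pulse normalization, outlier removal) can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the function names, the fixed pulse length of 32 samples, and the z-score outlier threshold are all assumptions.

```python
import math

def find_peaks(signal, min_distance):
    """Naive local-maximum peak picking with a refractory distance."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return peaks

def resample(pulse, length):
    """Linearly interpolate a pulse onto a fixed number of samples."""
    out = []
    for k in range(length):
        pos = k * (len(pulse) - 1) / (length - 1)
        lo = int(math.floor(pos))
        hi = min(lo + 1, len(pulse) - 1)
        frac = pos - lo
        out.append((1 - frac) * pulse[lo] + frac * pulse[hi])
    return out

def normalize(pulse):
    """Zero-mean, unit-variance normalization of one pulse."""
    mean = sum(pulse) / len(pulse)
    var = sum((x - mean) ** 2 for x in pulse) / len(pulse)
    std = math.sqrt(var) or 1.0
    return [(x - mean) / std for x in pulse]

def extract_pulses(bvp, min_distance=15, length=32, outlier_thresh=2.0):
    """Slice a recovered BVP trace into normalized single-pulse shapes,
    dropping pulses far (in z-score of Euclidean distance) from the mean."""
    peaks = find_peaks(bvp, min_distance)
    pulses = [normalize(resample(bvp[a:b + 1], length))
              for a, b in zip(peaks, peaks[1:])]
    if not pulses:
        return []
    template = [sum(p[i] for p in pulses) / len(pulses) for i in range(length)]
    dists = [math.sqrt(sum((x - t) ** 2 for x, t in zip(p, template)))
             for p in pulses]
    mu = sum(dists) / len(dists)
    sd = math.sqrt(sum((d - mu) ** 2 for d in dists) / len(dists)) or 1.0
    return [p for p, d in zip(pulses, dists) if (d - mu) / sd <= outlier_thresh]
```

Resampling every inter-peak segment to a common length makes the pulse shapes directly comparable, which is what a downstream biometric matcher needs.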
42

Low-Power Wireless Transceiver for Deeply Implanted Biomedical Devices

Majerus, Steve J.A. 04 June 2008 (has links)
No description available.
43

Popis a reprezentace dvourozměrných zvukových scén ve vícekanálových systémech reprodukce zvuku / 2D Audio Scene Analysis and Rendering in Multichannel Sound-Reproduction Systems

Trzos, Michal January 2009 (has links)
This thesis deals with the cues used by the human auditory system to identify the location of a sound, and with sound-rendering methods based on these cues, namely vector base amplitude panning and ambisonics, which are described in detail. These methods have been implemented as a VST plug-in module. The thesis also contains listening tests of second-order ambisonics along with analysis of the acquired data.
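Of the two rendering methods named in the abstract, vector base amplitude panning has a particularly compact core: the desired source direction is expressed as a linear combination of the two loudspeaker direction vectors, and the coefficients become the channel gains. A minimal 2D sketch (the function name and the energy normalization convention are illustrative assumptions, not taken from the thesis):

```python
import math

def vbap_2d(theta_deg, spk1_deg, spk2_deg):
    """Vector base amplitude panning for one loudspeaker pair in 2D.

    Solves p = g1 * l1 + g2 * l2 for the gains, where p, l1, l2 are unit
    vectors toward the source and the two loudspeakers, then normalizes so
    that g1^2 + g2^2 = 1 (constant perceived loudness)."""
    p = (math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    det = l1[0] * l2[1] - l1[1] * l2[0]   # 2x2 matrix inverse via Cramer's rule
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source exactly at one loudspeaker gets all the gain; a source midway between a symmetric pair gets equal gains, as expected from amplitude panning.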
44

Turbo Decoding With Early State Decisions

Lindblom, Johannes January 2008 (has links)
Turbo codes were first presented in 1993 by C. Berrou, A. Glavieux and P. Thitimajshima. Since then this class of error-correcting codes has become one of the most popular because of its good properties: turbo codes are able to come very close to the theoretical limit, the Shannon limit. Turbo codes are used, for example, in the third generation of mobile telephony (3G) and in the IEEE 802.16 (WiMAX) standard. There are some drawbacks to the algorithm for decoding turbo codes. The decoder uses a Maximum A Posteriori (MAP) algorithm, which is complex. Because the decoder uses many variables, the decoding circuit consumes a lot of power due to memory accesses and internal communication. One way in which this can be reduced is to make early decisions. In this work I have focused on making early decisions on the encoder states. A major part of the work was also to write the expressions so that as few variables as possible are needed. A termination condition is also introduced. Simulations based on estimates of the number of memory accesses show that the number of memory accesses decreases significantly.
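The thesis makes its early decisions on encoder states inside the MAP recursions; as a much simplified stand-in, the sketch below freezes individual decisions once their reliability (log-likelihood ratio magnitude) crosses a threshold, counts the memory accesses saved, and terminates once nothing remains active. The update rule, threshold value, and access counting are illustrative assumptions, not the thesis' algorithm.

```python
def early_decision_decode(init_llrs, update, threshold=4.0, max_iters=20):
    """Toy illustration of early decisions in an iterative decoder.

    `update(i, llr)` is assumed to return a refreshed log-likelihood ratio
    for position i.  A position whose |LLR| exceeds `threshold` is decided
    early and never revisited (modelling the avoided memory accesses), and
    decoding terminates as soon as every position is decided."""
    llrs = list(init_llrs)
    decided = [None] * len(llrs)
    accesses = 0
    for _ in range(max_iters):
        active = [i for i, d in enumerate(decided) if d is None]
        if not active:                     # termination condition
            break
        for i in active:
            llrs[i] = update(i, llrs[i])   # refine only undecided positions
            accesses += 1
            if abs(llrs[i]) >= threshold:
                decided[i] = 0 if llrs[i] > 0 else 1
    for i, d in enumerate(decided):        # force any remaining decisions
        if d is None:
            decided[i] = 0 if llrs[i] > 0 else 1
    return decided, accesses
```

Without the early decisions, every position would be updated on every iteration up to `max_iters`; the returned access count makes the saving visible.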
46

Encoder-Decoder Networks for Cloud Resource Consumption Forecasting

Mejdi, Sami January 2020 (has links)
Excessive resource allocation in telecommunications networks can be prevented by forecasting the resource demand when dimensioning the networks and then allocating the necessary resources accordingly, as part of an ongoing effort toward more sustainable development. In this work, traffic data from cloud environments that host deployed virtualized network functions (VNFs) of an IP Multimedia Subsystem (IMS) has been collected along with the computational resource consumption of the VNFs. A supervised learning approach was adopted to address the forecasting problem by considering encoder-decoder networks. These networks were applied to forecast future resource consumption of the VNFs by regarding the problem as a time series forecasting problem and recasting it as a sequence-to-sequence (seq2seq) problem. Different encoder-decoder network architectures were then utilized to forecast the resource consumption. The encoder-decoder networks were compared against a widely deployed classical time series forecasting model that served as a baseline. The results show that while the considered encoder-decoder models failed to outperform the baseline model in overall Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), their forecasting capabilities were more resilient to degradation over time. This suggests that encoder-decoder networks are more appropriate for long-term forecasting, which is in agreement with related literature. Furthermore, the encoder-decoder models achieved competitive performance compared to the baseline, despite limited hyperparameter tuning and the absence of more sophisticated functionality such as attention. This work has shown that there is indeed potential for deep learning applications in forecasting of cloud resource consumption. / Överflödig allokering av resurser i telekommunikationsnätverk kan förhindras genom att prognosera resursbehoven vid dimensionering av dessa nätverk. 
Detta görs i syfte att bidra till en mer hållbar utveckling. Inför detta projekt har trafikdata från molnmiljöer som hyser aktiva virtuella nätverksfunktioner (VNFs) till ett IP Multimedia Subsystem (IMS) samlats in tillsammans med resursförbrukningen av dessa komponenter. Detta examensarbete avhandlar hur effektivt övervakad maskininlärning i form av encoder-decoder-nätverk kan användas för att prognosera resursbehovet hos ovan nämnda VNFs. Encoder-decoder-nätverken appliceras genom att betrakta den insamlade datan som en tidsserie. Problemet med att förutspå utvecklingen av tidsserien formuleras sedan som ett sequence-to-sequence-problem (seq2seq). I detta arbete användes en samling encoder-decoder-nätverk med olika arkitekturer för att prognosera resursförbrukningen, och dessa jämfördes med en populär modell hämtad från klassisk tidsserieanalys. Resultaten visar att encoder-decoder-nätverken misslyckades med att överträffa den klassiska tidsseriemodellen med avseende på Root Mean Squared Error (RMSE) och Mean Absolute Error (MAE). Dock visade encoder-decoder-nätverken en betydlig motståndskraft mot prestandaförfall över tid i jämförelse med den klassiska tidsseriemodellen. Detta indikerar att encoder-decoder-nätverk är lämpliga för prognosering över en längre tidshorisont. Utöver detta visade encoder-decoder-nätverken en konkurrenskraftig förmåga att förutspå det korrekta resursbehovet, trots en begränsad justering av hyperparametrarna och utan mer sofistikerad funktionalitet implementerad, som exempelvis attention.
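The recasting of resource traces as a sequence-to-sequence problem amounts to cutting the series into (input window, forecast horizon) pairs for supervised training. A minimal sketch, with the function name and window sizes as assumptions:

```python
def make_seq2seq_pairs(series, input_len, horizon):
    """Recast a univariate time series as supervised seq2seq samples: each
    input window of `input_len` observations is paired with the `horizon`
    values that follow it, which is the framing used before feeding an
    encoder-decoder network."""
    pairs = []
    for start in range(len(series) - input_len - horizon + 1):
        enc_in = series[start:start + input_len]
        dec_out = series[start + input_len:start + input_len + horizon]
        pairs.append((enc_in, dec_out))
    return pairs
```

The encoder consumes `enc_in` and the decoder is trained to emit `dec_out`; a classical baseline would be fit on the same windows, keeping the comparison fair.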
47

Transformer decoder as a method to predict diagnostic trouble codes in heavy commercial vehicles / Transformer decoder som en metod för att förutspå felkoder i tunga fordon

Poljo, Haris January 2021 (has links)
Diagnostic trouble codes (DTCs) have traditionally been used by mechanics to figure out what is wrong with a vehicle. A vehicle generates a DTC when a specific condition in the vehicle is met. This condition has been defined by an engineer and represents some fault that has occurred, so the intuition is that DTCs contain useful information about the health of the vehicle. Due to the sequential ordering of DTCs and the high count of unique values, this modality of data has characteristics that resemble those of natural language. This thesis investigates whether an algorithm that has shown promise in the field of Natural Language Processing can be applied to sequences of DTCs. More specifically, the deep learning model called the transformer decoder is compared to a baseline model called n-gram in terms of how well they estimate a probability distribution of the next DTC conditioned on previously seen DTCs. Estimating such a probability distribution could be useful for manufacturers of heavy commercial vehicles such as Scania when creating systems that help them in their mission of ensuring a high uptime of their vehicles. The algorithms were compared by first doing a hyperparameter search for both algorithms and then comparing the models using the 5x2 cross-validation paired t-test. Three metrics were evaluated: perplexity, Top-1 accuracy, and Top-5 accuracy. It was concluded that there was a significant difference in the performance of the two models, where the transformer decoder was the better method given the metrics used in the evaluation. The transformer decoder had a perplexity of 22.1, Top-1 accuracy of 37.5%, and a Top-5 accuracy of 59.1%. In contrast, the n-gram had a perplexity of 37.6, Top-1 accuracy of 7.5%, and a Top-5 accuracy of 30%. / Felkoder har traditionellt använts av mekaniker för att ta reda på vad som är fel med ett fordon. 
Ett fordon genererar en felkod när ett visst villkor i fordonet är uppfyllt; detta villkor har definierats av en ingenjör och representerar något fel som har skett. Därför är intuitionen att felkoder innehåller användbar information om fordonets hälsa. På grund av den sekventiella ordningen av felkoder och det höga antalet unika värden har denna modalitet av data egenskaper som liknar de för naturligt språk. Detta arbete undersöker om en algoritm som har visat sig vara lovande inom språkteknologi kan tillämpas på sekvenser av felkoder. Mer specifikt kommer djupinlärningsmodellen som kallas transformer decoder att jämföras med en basmodell som kallas n-gram, med avseende på hur väl de estimerar en sannolikhetsfördelning över nästa felkod givet tidigare sedda felkoder. Att uppskatta en sannolikhetsfördelning kan vara användbart för tillverkare av tunga fordon såsom Scania när de skapar system som hjälper dem i deras uppdrag att säkerställa en hög upptid för sina fordon. Algoritmerna jämfördes genom att först göra en hyperparametersökning för båda modellerna och sedan jämföra modellerna med hjälp av parat t-test med 5x2-korsvalidering. Tre mätvärden utvärderades: perplexity, Top-1-träffsäkerhet och Top-5-träffsäkerhet. Slutsatsen blev att det fanns en signifikant skillnad i prestanda mellan de två modellerna, där transformer decoder var den bättre metoden givet mätvärdena som användes vid utvärderingen. Transformer decoder hade en perplexity på 22.1, Top-1-träffsäkerhet på 37.5% och en Top-5-träffsäkerhet på 59.1%. N-gram-modellen hade däremot en perplexity på 37.6, Top-1-träffsäkerhet på 7.5% och en Top-5-träffsäkerhet på 30%.
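The three reported metrics can be computed directly from a model's predictive distributions over the next DTC. A hedged sketch (the function name and the probability floor are assumptions; the thesis does not publish its evaluation code):

```python
import math

def evaluate_next_code(predictions, targets, k=5):
    """Compute perplexity, Top-1 and Top-k accuracy for next-code prediction.

    `predictions` is a list of probability distributions (dicts mapping
    candidate code -> probability) and `targets` the codes that actually
    occurred next.  Perplexity is exp(mean negative log-likelihood); Top-k
    accuracy checks whether the target is among the k most probable codes."""
    nll, top1, topk = 0.0, 0, 0
    for dist, target in zip(predictions, targets):
        p = dist.get(target, 1e-12)            # floor to avoid log(0)
        nll += -math.log(p)
        ranked = sorted(dist, key=dist.get, reverse=True)
        top1 += ranked[0] == target
        topk += target in ranked[:k]
    n = len(targets)
    return math.exp(nll / n), top1 / n, topk / n
```

The same routine can score both the transformer decoder and the n-gram baseline, since each only needs to expose a distribution over the next code.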
48

Flexible encoder and decoder designs for low-density parity-check codes

Kopparthi, Sunitha January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don M. Gruenbacher / Future technologies such as cognitive radio require flexible and reliable hardware architectures that can be easily configured and adapted to varying coding parameters. The objective of this work is to develop a flexible hardware encoder and decoder for low-density parity-check (LDPC) codes. The design methodologies used for the implementation of an LDPC encoder and decoder are flexible in terms of parity-check matrix, code rate and code length. All these designs are implemented on a programmable chip and tested. Encoder implementations of LDPC codes are optimized for area due to their high complexity, and such designs usually have relatively low data rates. Two new encoder designs are developed that achieve much higher data rates, up to 844 Mbps, at the cost of more implementation area. Using structured LDPC codes decreases the encoding complexity and provides design flexibility. An encoder architecture is presented that adheres to the structured LDPC codes defined in the IEEE 802.16e standard. A single encoder design is also developed that accommodates different code lengths and code rates and does not require re-synthesis of the design when the encoding parameters change. The flexible encoder design for structured LDPC codes is also implemented on a custom chip. The maximum coded data rate of the structured encoder is up to 844 Mbps, and for a given code rate its value is independent of the code length. An LDPC decoder is designed whose design methodology is generic: it is applicable to structured as well as randomly generated LDPC codes. The coded data rate of the decoder increases with the code length. The number of decoding iterations plays an important role in determining the decoder's performance and latency. 
The design validates the estimated codeword after every iteration and stops the decoding process as soon as a correct codeword is estimated, which saves power. For a given parity-check matrix and signal-to-noise ratio, a procedure is presented for finding an optimum value of the maximum number of decoding iterations that considers the effects of power, delay, and error performance.
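The per-iteration codeword validation described above is a syndrome check: an estimate is a valid codeword exactly when H·cᵀ = 0 over GF(2). A software sketch of that stopping rule, using a (7,4) Hamming parity-check matrix as a stand-in code; the message-passing update `iterate` is left abstract and all names are illustrative:

```python
def syndrome_ok(H, codeword):
    """True iff every parity check is satisfied, i.e. H * c^T = 0 over GF(2)."""
    return all(sum(h * c for h, c in zip(row, codeword)) % 2 == 0 for row in H)

def decode_with_early_stop(H, iterate, llrs, max_iters=50):
    """Run `iterate` (one message-passing update, assumed supplied by the
    caller) and stop as soon as the hard-decision estimate passes all
    parity checks, instead of always running `max_iters` iterations."""
    for it in range(max_iters):
        llrs = iterate(llrs)
        estimate = [1 if llr < 0 else 0 for llr in llrs]   # hard decision
        if syndrome_ok(H, estimate):
            return estimate, it + 1       # early stop: valid codeword found
    return estimate, max_iters
```

Stopping at the first valid codeword is what saves power: on good channels most blocks converge in a few iterations, far below any fixed iteration budget.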
49

Spatial modulation : theory to practice

Younis, Abdelhamid January 2014 (has links)
Spatial modulation (SM) is a transmission technique proposed for multiple-input multiple-output (MIMO) systems in which only one transmit antenna is active at a time, offering an increase in spectral efficiency equal to the base-two logarithm of the number of transmit antennas. Activating only one antenna at each time instance improves the average bit error ratio (ABER), since inter-channel interference (ICI) is avoided, and reduces hardware complexity, algorithmic complexity and power consumption. Thus, SM is an ideal candidate for large-scale MIMO (tens and hundreds of antennas). The analytical ABER performance of SM has been studied and different frameworks have been proposed in other works; however, these frameworks have various limitations. Therefore, a closed-form analytical bound for the ABER performance of SM over correlated and uncorrelated Rayleigh, Rician and Nakagami-m channels is proposed in this work. Furthermore, in spite of the low-complexity implementation of SM, there is still potential for further reductions, by limiting the number of possible combinations through the sphere decoder (SD) principle. However, existing SD algorithms do not consider the basic and fundamental principle of SM: that at any given time only one antenna is active. Therefore, two modified SD algorithms tailored to SM are proposed. It is shown that the proposed sphere decoder algorithms offer optimal performance with a significant reduction in computational complexity. Finally, the logarithmic increase in spectral efficiency offered by SM, together with the requirement that the number of antennas be a power of two, would require a large number of antennas. To overcome this limitation, two new MIMO modulation schemes, generalised spatial modulation (GNSM) and variable generalised spatial modulation (VGSM), are proposed, in which the same symbol is transmitted simultaneously from more than one transmit antenna at a time. 
Transmitting the same data symbol from more than one antenna reduces the number of transmit antennas needed and retains the key advantages of SM. In initial development, simple channel models can be used; as the system matures, however, it should be tested on more realistic channels that include the interactions between the environment and the antennas. Therefore, a full analysis of the ABER performance of SM over urban channel measurements is carried out. The results using the urban measured channels confirm the theoretical work done in the field of SM. Finally, for the first time, the performance of SM is tested in a practical testbed, whereby the SM principle is validated.
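The spectral-efficiency claim (log2 of the transmit-antenna count on top of log2 of the constellation size) follows directly from how SM maps bits. A minimal sketch of that mapping; the function name and the QPSK example are assumptions, not from the thesis:

```python
import math

def sm_map(bits, num_tx, constellation):
    """Spatial modulation bit mapping: the first log2(num_tx) bits select
    the single active transmit antenna, the remaining log2(M) bits select
    the constellation symbol, giving log2(num_tx) + log2(M) bits per
    channel use."""
    na = int(math.log2(num_tx))              # bits carried by antenna index
    ns = int(math.log2(len(constellation)))  # bits carried by the symbol
    assert len(bits) == na + ns
    antenna = int("".join(map(str, bits[:na])), 2)
    symbol = constellation[int("".join(map(str, bits[na:])), 2)]
    return antenna, symbol
```

With 4 transmit antennas and QPSK this carries 2 + 2 = 4 bits per channel use while keeping a single radio-frequency chain active, which is the hardware saving the abstract emphasizes.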
50

NEW TELEMETRY HARDWARE FOR THE DEEP SPACE NETWORK TELEMETRY PROCESSOR SYSTEM

Puri, Amit, Ozkan, Siragan, Schaefer, Peter, Anderson, Bob, Williams, Mike 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / This paper describes the new Telemetry Processor Hardware (TPH) that Avtec Systems has developed for the Deep Space Network (DSN) Telemetry Processor (TLP) system. Avtec is providing the Telemetry Processor Hardware to RTLogic! for integration into the Telemetry Processor system. The Deep Space Network (DSN) is an international network of antennas that supports interplanetary spacecraft missions for exploration of the solar system and the universe. The Jet Propulsion Laboratory manages the DSN for NASA. The TLP system provides the capability to acquire, process, decode and distribute deep space probe and Earth orbiter telemetry data. The new TLP systems will be deployed at each of the three deep-space communications facilities placed approximately 120 degrees apart around the world: at Goldstone, California; near Madrid, Spain; and near Canberra, Australia. The Telemetry Processor Hardware (TPH) supports both CCSDS and TDM telemetry data formats. The TPH performs the following processing steps: soft-symbol input selection and measurement; convolutional decoding; routing to external decoders; time tagging; frame synchronization; derandomization; and Reed-Solomon decoding. The TPH consists of a VME Viterbi Decoder/MCD III Interface board (VM-7001) and a PCI-mezzanine Frame Synchronizer/Reed-Solomon Decoder board (PMC-6130-J). The new Telemetry Processor Hardware is implemented using the latest Field Programmable Gate Array (FPGA) technology to provide the density and speed to meet current requirements as well as the flexibility to accommodate processing enhancements in the future.
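Of the listed processing steps, frame synchronization is the most self-contained: scan the symbol stream for the 32-bit CCSDS attached sync marker 0x1ACFFC1D and take the bits that follow as a frame. A simplified hard-decision sketch (the real hardware operates on soft symbols; the function name, frame length, and error tolerance parameter are assumptions):

```python
def frame_sync(bitstream, marker, frame_len, max_errors=0):
    """Slide over the bit stream looking for the sync marker, tolerating up
    to `max_errors` bit mismatches, and return the start offsets of the
    frames that follow each detected marker."""
    starts = []
    i = 0
    while i + len(marker) <= len(bitstream):
        mismatches = sum(a != b
                         for a, b in zip(bitstream[i:i + len(marker)], marker))
        if mismatches <= max_errors:
            starts.append(i + len(marker))   # frame begins right after marker
            i += len(marker) + frame_len     # jump past the frame just found
        else:
            i += 1
    return starts

# CCSDS attached sync marker (ASM) as a list of bits
ASM = [int(b) for b in format(0x1ACFFC1D, "032b")]
```

Allowing a few marker bit errors (`max_errors > 0`) is what makes synchronization robust on noisy links, at the cost of a slightly higher false-lock probability.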
