  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Joint Design of Precoders and Decoders for CDMA Multiuser Cooperative Networks

Liu, Jun-tin 07 September 2011 (has links)
In this thesis, we consider a code-division multiple-access (CDMA) multiuser cooperative network in which all sources transmit signals using assigned spreading waveforms in the first phase, and all relays transmit precoded signals using a common spreading waveform to forward those signals to the destinations in the second phase, in order to improve performance. We propose a precoding strategy at the relays and a decoding strategy at the destinations: zero-forcing is first applied to eliminate multiuser interference at the destinations, and the relay precoding vector and the destination decoding vectors are then jointly designed to meet different optimization objectives. Under a power constraint, we first jointly design the precoding and decoding vectors to maximize the average SNR; because this design favors the source-destination pairs with better channel quality, we also present a fairness-oriented design that jointly optimizes the precoding and decoding vectors, again under a power constraint, to maximize the worst-case SNR among all pairs.
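As a hedged illustration of the zero-forcing step described in this abstract, the sketch below removes multiuser interference in a generic linear model y = Hs + n; the channel matrix, user count, and noise level are made-up placeholders, not quantities from the thesis.

```python
import numpy as np

# Minimal zero-forcing sketch for a K-user linear model y = H @ s + n.
# H, K, and the noise level are illustrative assumptions, not values
# taken from the thesis.
rng = np.random.default_rng(0)
K = 4                                   # number of source-destination pairs
H = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))  # effective channel
s = rng.choice([-1.0, 1.0], size=K)     # BPSK symbols, one per source
n = 0.05 * (rng.normal(size=K) + 1j * rng.normal(size=K))   # receiver noise
y = H @ s + n

# Zero-forcing: pseudo-invert the channel so each destination sees only
# its own symbol, eliminating multiuser interference.
W_zf = np.linalg.pinv(H)
s_hat = np.sign((W_zf @ y).real)        # hard BPSK decisions
print("bit errors:", int(np.sum(s_hat != s)))
```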
202

Hur tolkar du? : En studie om reklambilder utifrån sändar- och mottagarperspektiv. / How do you interpret? : A study of advertising images based on the transmitter and recipient perspective.

Yusuf, Farah, Jaykar, Ida, Emilsson, Isabelle January 2008 (has links)
The study aims to gain a better understanding of, and to explore, how a selected group of receivers perceives two advertising images from Indiska and Vila, and then to compare their readings with what Indiska and Vila themselves want to communicate. We base the study on theories of encoding/decoding, which describe how companies charge their advertising images with values and how recipients decode those values.

A qualitative study was carried out based on analytical induction (planning, collection and analysis). Through this method we created categories from the collected data and put them in relation to each other. The results showed that the overall impression of the images and their context are of great importance for how our respondents perceive the advertising images. The overall impression conveys emotion that reinforces the expression and the message. The respondents partially perceived the transmitters' message in the pictures, but some elements did not match the transmitters' intention.

We also carried out a reception analysis to find out whether the place where our respondents grew up made a difference in how they interpreted the advertising images. Although we could not reveal any significant difference between these groups, we still believe that social, cultural and economic background matters when it comes to interpretation.
203

Ämnesövergripande undervisning i läsförståelse : Mellanstadielärares kompetens och undervisningsstrategier i olika ämnen / Interdisciplinary teaching in reading comprehension : Teachers’ qualifications and teaching strategies in different subjects

Johansson, Sofia January 2015 (has links)
In this study, six teachers were interviewed about their view of, and their teaching of, reading comprehension, both for pupils who have cracked the reading code and for those who have not. The aim is to illustrate whether all teachers in middle school devote time to training reading comprehension, or whether this is left to the teachers of Swedish, since only the syllabus for the subject of Swedish states that pupils are to be given the opportunity to develop reading strategies. The interviews are semi-structured and the study is qualitative. The informants are three teachers of Swedish and three teachers of other subjects. Two different interview guides, each containing three questions, were used; the main questions were the same, but each guide had one question directly connected to the teacher's subject. The results show that all teachers believe that training in reading comprehension should be conducted in all subjects, not just Swedish. However, the work is done differently: teachers of Swedish discuss their teaching in a much more purposeful way than the other teachers, have developed their competence concerning reading comprehension, and have more knowledge of it than teachers of other subjects. The teachers who do not teach Swedish say that lack of time is the reason why reading comprehension cannot be integrated to the extent that they would like.
204

Extracting Spatiotemporal Word and Semantic Representations from Multiscale Neurophysiological Recordings in Humans

Chan, Alexander Mark 21 June 2014 (has links)
With the recent advent of neuroimaging techniques, the majority of the research studying the neural basis of language processing has focused on the localization of various lexical and semantic functions. Unfortunately, the limited time resolution of functional neuroimaging prevents a detailed analysis of the dynamics involved in word recognition, and the hemodynamic basis of these techniques prevents the study of the underlying neurophysiology. Compounding this problem, current techniques for the analysis of high-dimensional neural data are mainly sensitive to large effects in a small area, preventing a thorough study of the distributed processing involved in representing semantic knowledge. This thesis demonstrates the use of multivariate machine-learning techniques for the study of the neural representation of semantic and speech information in electro/magneto-physiological recordings with high temporal resolution. Support vector machines (SVMs) allow for the decoding of semantic category and word-specific information from non-invasive electroencephalography (EEG) and magnetoencephalography (MEG) and demonstrate the consistent but spatially and temporally distributed nature of such information. Moreover, the anteroventral temporal lobe (avTL) may be important for coordinating these distributed representations, as supported by the presence of supramodal category-specific information in intracranial recordings from the avTL as early as 150 ms after auditory or visual word presentation. Finally, to study the inputs to this lexico-semantic system, recordings from a high-density microelectrode array in the anterior superior temporal gyrus (aSTG) were obtained, and the recorded spiking activity demonstrates the presence of single neurons that respond specifically to speech sounds. The successful decoding of word identity from this firing-rate information suggests that the aSTG may be involved in the population coding of acousto-phonetic speech information that is likely on the pathway for mapping speech sounds to meaning in the avTL. The feasibility of extracting semantic and phonological information from multichannel neural recordings using machine-learning techniques provides a powerful method for studying language with large datasets and has potential implications for the development of fast and intuitive communication prostheses. / Engineering and Applied Sciences
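As a rough, self-contained illustration of this kind of decoding, the sketch below trains a linear SVM to classify a stimulus category from trial-by-feature vectors; the synthetic arrays and parameter values are placeholder assumptions, not the thesis's recordings or pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for evoked MEG/EEG features: n_trials x (channels * time bins).
# Real decoding would use preprocessed sensor data; everything here is illustrative.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 64 * 20
y = rng.integers(0, 2, size=n_trials)            # semantic category labels
X = rng.normal(size=(n_trials, n_features))
X[y == 1, :50] += 0.4                            # inject a weak class difference

clf = LinearSVC(C=0.01, max_iter=5000)           # linear SVM decoder
scores = cross_val_score(clf, X, y, cv=5)        # cross-validated decoding accuracy
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```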
205

High-dimensional classification for brain decoding

Croteau, Nicole Samantha 26 August 2015 (has links)
Brain decoding involves the determination of a subject’s cognitive state or an associated stimulus from functional neuroimaging data measuring brain activity. In this setting the cognitive state is typically characterized by an element of a finite set, and the neuroimaging data comprise voluminous amounts of spatiotemporal data measuring some aspect of the neural signal. The associated statistical problem is one of classification from high-dimensional data. We explore the use of functional principal component analysis, mutual information networks, and persistent homology for examining the data through exploratory analysis and for constructing features characterizing the neural signal for brain decoding. We review each approach from this perspective, and we incorporate the features into a classifier based on symmetric multinomial logistic regression with elastic net regularization. The approaches are illustrated in an application where the task is to infer from brain activity measured with magnetoencephalography (MEG) the type of video stimulus shown to a subject. / Graduate
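A minimal sketch of the classifier named in this abstract, multinomial logistic regression with an elastic net penalty; the random feature matrix stands in for the fPCA, mutual-information-network, and persistent-homology features, and every parameter value is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for fPCA scores, network summaries,
# or persistent-homology descriptors extracted from neuroimaging data.
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 300))               # high-dimensional: p >> n regime
y = rng.integers(0, 3, size=150)              # one of several stimulus classes

# Multinomial logistic regression with an elastic net penalty;
# l1_ratio mixes lasso (sparsity) and ridge (grouping) behaviour.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```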
206

Αρχιτεκτονικές VLSI για την αποκωδικοποίηση κωδικών LDPC με εφαρμογή σε ασύρματες ψηφιακές επικοινωνίες / VLSI architectures for LDPC code decoding with application in wireless digital communications

Γλυκιώτης, Γιάννης 16 May 2007 (has links)
This thesis focuses on decoding with LDPC codes. LDPC encoding and decoding are studied and evaluated under combined criteria of delivered quality (BER under various transmission conditions) and hardware implementation complexity. Simulation is used to examine how far decoder performance is affected by the finite word-length representation used to implement the architecture in hardware. Once a word length is chosen such that the decoder's performance approaches the theoretical one, the decoder architecture is studied and designed to satisfy further practical criteria, with emphasis on low power consumption. The novel contribution of the thesis is a new criterion for terminating the iterations of iterative LDPC decoders. The proposed criterion is amenable to hardware implementation and, as the results show, can substantially reduce decoder power dissipation. The criterion detects "cycles" in the sequence of soft words produced during decoding. Such cycles occur in some cases of low signal-to-noise ratio in which the decoder is unable to decide on a codeword, leading to wasted energy: the bit error rate does not improve while the decoder keeps running. The proposed architecture terminates the decoding process when such a cycle occurs, allowing substantial power savings at a very small penalty in decoder performance. The criterion can be applied to any existing LDPC decoder architecture; in this thesis, the results of applying it to the Hardware-Sharing and Parallel decoder architectures are studied.
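A minimal sketch of the stopping criterion's core idea, assuming a generic iterative decoder: record each soft word and terminate when one recurs, since the decoder can then only repeat itself. `decode_iteration` and `satisfies_parity` are hypothetical placeholders, not the thesis's hardware architecture.

```python
# Sketch of the proposed stopping criterion: terminate iterative decoding
# when the sequence of (quantized) soft words revisits a previous state,
# i.e. the decoder has entered a cycle and cannot make further progress.
# `decode_iteration` and `satisfies_parity` are hypothetical callables.

def decode_with_cycle_check(soft_word, decode_iteration, satisfies_parity,
                            max_iters=50):
    seen = {tuple(soft_word)}                    # states visited so far
    for _ in range(max_iters):
        soft_word = decode_iteration(soft_word)  # one BP/min-sum pass
        if satisfies_parity(soft_word):
            return soft_word, "converged"
        state = tuple(soft_word)
        if state in seen:                        # cycle detected: further
            return soft_word, "cycle"            # iterations only waste power
        seen.add(state)
    return soft_word, "max_iters"
```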
207

Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes

Planjery, Shiva Kumar January 2013 (has links)
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
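To give a concrete flavor of a finite alphabet iterative decoder, here is a toy 3-bit message update; the saturating-sum variable-node map below is a simplistic stand-in for the dissertation's carefully designed update functions, and all values are illustrative.

```python
import numpy as np

# Toy FAID flavor: messages live in a small finite alphabet rather than
# being quantized log-likelihoods. The update map (saturating sum) is only
# a stand-in; the dissertation designs these maps specifically to maximize
# guaranteed error correction on the BSC.
ALPHABET = np.array([-3, -2, -1, 0, 1, 2, 3])   # 3-bit message levels

def variable_node_update(channel_value, incoming):
    # channel_value in {-1, +1} from the BSC; incoming: extrinsic messages.
    raw = channel_value + sum(incoming)
    return int(np.clip(raw, ALPHABET[0], ALPHABET[-1]))  # saturate to alphabet

def check_node_update(incoming):
    # Min-sum style check node: sign product times minimum magnitude.
    sign = np.prod(np.sign(incoming)) or 1
    return int(sign * min(abs(m) for m in incoming))

print(variable_node_update(+1, [3, 2]))   # -> 3 (saturated)
print(check_node_update([-2, 3, 1]))      # -> -1
```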
208

Detection and Decoding for Magnetic Storage Systems

Radhakrishnan, Rathnakumar January 2009 (has links)
The hard-disk storage industry is at a critical time, as current technologies are incapable of achieving densities beyond 500 Gb/in², which will be reached in a few years. Many radically new storage architectures have been proposed, which along with advanced signal processing algorithms are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems.

Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks. Using graph-theoretical algorithms, a method to incorporate the maximum number of such checks for a given complexity is provided. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are practically the most important.

This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components such as receivers, equalizers and detectors for this channel, the equivalence of this system to a two-track/two-head system is determined and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system than to store it using a binary system with inter-track interference.

Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on transition characteristics and non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
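As background to detection for channels with memory, here is a minimal Viterbi detector for a two-tap ISI channel of the kind magnetic recording channels are often modeled as; the tap value, noise level, and block length are illustrative assumptions, not parameters from the dissertation.

```python
import numpy as np

# Minimal Viterbi detector for a two-tap ISI channel y[k] = x[k] + a*x[k-1] + n[k]
# (a channel with memory 1). Tap value, noise level, and length are illustrative.
a, sigma, n = 0.5, 0.3, 200
rng = np.random.default_rng(2)
x = rng.choice([-1.0, 1.0], size=n)
xprev = np.concatenate(([0.0], x[:-1]))           # x[-1] = 0 before transmission
y = x + a * xprev + sigma * rng.normal(size=n)

# Trellis over the previous symbol; survivor paths stored per state.
states = (-1.0, 1.0)
cost = {0.0: 0.0}                                 # start in the "x[-1] = 0" state
path = {0.0: []}
for k in range(n):
    new_cost, new_path = {}, {}
    for cur in states:                            # hypothesized current symbol
        best_prev = min(cost, key=lambda p: cost[p] + (y[k] - cur - a * p) ** 2)
        new_cost[cur] = cost[best_prev] + (y[k] - cur - a * best_prev) ** 2
        new_path[cur] = path[best_prev] + [cur]
    cost, path = new_cost, new_path

x_hat = np.array(path[min(cost, key=cost.get)])   # best surviving path
print("symbol errors:", int(np.sum(x_hat != x)))
```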
209

Downlink W-CDMA performance analysis and receiver implementation on SC140 Motorola DSP

Ghosh, Kaushik 30 September 2004 (has links)
High data rate applications are the trend in today's wireless technology. The W-CDMA standard was designed to support such high data rates, operating at a chip rate of 3.84 Mcps. The main purpose of this research was to analyze the feasibility of a fixed-point implementation of the W-CDMA downlink receiver algorithm on a general-purpose digital signal processor (StarCore SC140 by Motorola). The very long instruction word (VLIW) architecture of the SC140 core is exploited to generate an optimized implementation that meets the real-time timing requirements of the algorithm. The other main aim of this work was to study and evaluate the performance of the W-CDMA downlink structure with space-time transmit diversity incorporated. The effect of the channel estimation algorithm used was also studied extensively.
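A fixed-point feasibility study of this kind typically emulates the DSP's fractional arithmetic; the sketch below models Q15 (1 sign bit + 15 fraction bits) multiply-accumulate to expose quantization error. The format choice and saturation behaviour are generic assumptions, not details of the SC140 or the thesis.

```python
# Toy Q15 fixed-point emulation, the kind of arithmetic a fixed-point
# receiver study must model. Q15 = 1 sign bit + 15 fractional bits.
Q = 15
LO, HI = -(1 << Q), (1 << Q) - 1          # representable integer range

def to_q15(x: float) -> int:
    return max(LO, min(HI, int(round(x * (1 << Q)))))  # quantize + saturate

def q15_mul(a: int, b: int) -> int:
    return max(LO, min(HI, (a * b) >> Q)) # fractional multiply with saturation

def from_q15(a: int) -> float:
    return a / (1 << Q)

# Compare a floating-point multiply-accumulate with its Q15 counterpart.
taps, data = [0.25, -0.5, 0.125], [0.9, -0.3, 0.6]
ref = sum(t * d for t, d in zip(taps, data))
acc = 0
for t, d in zip(taps, data):
    acc = max(LO, min(HI, acc + q15_mul(to_q15(t), to_q15(d))))
print(ref, from_q15(acc))                 # the gap is the quantization error
```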
210

GI-metoden – bluff eller vägen till ett hälsosamt liv? : En studie om hur medier marknadsför hälsobegreppet Glykemiskt index / The GI method – bluff or the path to a healthy life? : A study of how the media market the health concept glycemic index

Westlund, Annika January 2008 (has links)
Abstract
Title: The GI method – bluff or the path to a healthy life?
Number of pages: 41
Tutor: Lowe Hedman
Author: Annika Westlund
Course: Media and Communication Studies C
Period: Fall 2006
University: Division of Media and Communications Studies C
Purpose/Aim: The aim of this essay was to investigate how the site GI viktkoll describes the glycemic index on its website. The intention was also to investigate how the media presented GI through articles and how they used doctors and dieticians to appear trustworthy. Another aim was to investigate what effect GI viktkoll could have on its readers.
Method/Material: I chose a qualitative method and carried out a discourse analysis of the articles presented on GI viktkoll's website during a period of three weeks; this was my main method in the essay. I also conducted two interviews with trained professionals. The articles on the website were thus the main material used in the essay.
Main results: My results show that GI viktkoll does have an underlying aim in wanting to influence its readers in a specific way. It is therefore important for readers to be aware that GI viktkoll might not present a critical way of thinking or every aspect of the phenomenon. GI viktkoll also has influence on people, because it has the power to change people's minds about the GI method in society. GI viktkoll presents only the healthy way of living through the GI method, although there is still considerable disagreement from other directions, such as doctors and dieticians, about the actual effects of the GI method on healthy people.
Keywords: glycemic index, media culture, encoding/decoding, discourse analysis.
