201 |
Dyskalkyli : Normativa data för svenska barn i årskurs 5 och 6 på Dyscalculia Screener och hur testresultat korrelerar med avkodningsförmåga och skolmatematik / Dyscalculia : Normative data for Swedish children in school years 5 and 6 on the Dyscalculia Screener and how test results correlate with decoding ability and school mathematics
Sahlberg, Anna, Taavola, Lina-Lotta January 2011 (has links)
Dyscalculia (a specific arithmetic disability) is one of several causes of mathematical difficulties. Studies have indicated an association between dyscalculia and dyslexia, and people with dyscalculia have difficulty managing school mathematics. Two different theoretical perspectives explain the cause of dyscalculia: the systems theory and the modular theory. The Dyscalculia Screener is a screening tool based on the modular theory, which holds that dyscalculia stems from difficulties with basic number sense; it is intended to distinguish people with dyscalculia from those who are poor at mathematics for other reasons. The test contains subtests that measure reaction time (Simple Reaction Time), number sense (Dot Enumeration and Numerical Stroop) and arithmetic (Addition and Multiplication). This study investigated how Swedish children in school years 5 and 6 performed on the test, in order to provide reference data for Swedish conditions and to examine how well the English norms apply. The study also examined the relationship between decoding ability for real words and non-words (measured with the test LäSt) and performance on the Dyscalculia Screener, as well as the relationship between performance in school mathematics and the results on each test. The study included 66 children: 36 in year 5 and 30 in year 6. The Swedish children's results differed in some respects from the English norms: they scored lower than the norms on the subtests Simple Reaction Time and Multiplication, higher on Dot Enumeration and Numerical Stroop, and within the norms on Addition. An association between decoding ability and arithmetic ability was found, especially for decoding of non-words. Performance on the subtests Numerical Stroop, Addition and Multiplication differed between children who achieved the goals in mathematics and those who were uncertain to achieve them or did not achieve them.
|
202 |
Joint Design of Precoders and Decoders for CDMA Multiuser Cooperative Networks
Liu, Jun-tin 07 September 2011 (has links)
In this paper, we consider a code-division multiple access (CDMA) multiuser cooperative network in which all sources transmit their signals with assigned spreading waveforms in the first phase, and all relays transmit precoded signals with a common spreading waveform in the second phase to help deliver the signals to the destinations and improve performance. We propose a precoding strategy at the relays and a decoding strategy at the destinations: zero-forcing is first used to eliminate the multiuser interference at the destinations, and the precoding vector at the relays and the decoding vectors at the destinations are then jointly designed to meet different optimization objectives. We first design the precoding and decoding vectors to maximize the average signal-to-noise ratio (SNR) subject to a power constraint; because this design favors the source-destination pairs with better channel quality, we also present a fairness-oriented design that jointly optimizes the precoding and decoding vectors to maximize the worst-case SNR, again under the power constraint.
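As a rough illustration of the zero-forcing step described above, here is a minimal sketch assuming a flat-fading linear model y = Hs + n with a channel matrix known at the receiver, written in Python/NumPy (none of these modelling choices are specified in the abstract); the joint precoder/decoder optimization and the fairness design themselves are not reproduced.

```python
import numpy as np

def zero_forcing_decode(H, y):
    """Suppress multiuser interference with a zero-forcing (pseudo-inverse) filter.

    H : (num_rx, num_users) complex channel matrix, assumed known at the destination
    y : (num_rx,) received vector
    Returns interference-free (but noise-enhanced) symbol estimates for all users.
    """
    return np.linalg.pinv(H) @ y

# Toy example: 3 source-destination pairs observed over 4 receive dimensions
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))) / np.sqrt(2)
s = np.array([1.0, -1.0, 1.0])                                   # BPSK symbols
n = 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.round(zero_forcing_decode(H, H @ s + n).real, 2))        # approximately [ 1. -1.  1.]
```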
|
203 |
Hur tolkar du? : En studie om reklambilder utifrån sändar- och mottagarperspektiv. / How do you interpret? : A study of advertising images based on the transmitter and recipient perspective.
Yusuf, Farah, Jaykar, Ida, Emilsson, Isabelle January 2008 (has links)
The study aims to gain a better understanding of, and to explore, how a selected group (receivers) perceives two selected advertising images from Indiska and Vila, and then to compare their opinions with what Indiska and Vila themselves want to communicate. We base the study on theories of encoding/decoding, which deal with how companies load their advertising images with values and how recipients decode these values.

A qualitative study was carried out based on analytical induction (planning, collection and analysis). Through analytical induction, we created categories based on the collected data and related them to each other. The results showed that the overall impression of the images and their context are of great importance for how our respondents perceive the advertising images. The overall impression conveys emotions that reinforce the expression and the message. The respondents partially perceived the transmitters' message in the pictures, but some elements did not match the transmitters' intention.

We also carried out a reception analysis to find out whether the place where our respondents grew up made a difference in how they interpreted the advertising images. Although we could not reveal any significant difference between these groups, we still believe that social, cultural and economic background matters when it comes to interpretation.
|
204 |
Ämnesövergripande undervisning i läsförståelse : Mellanstadielärares kompetens och undervisningsstrategier i olika ämnen / Interdisciplinary teaching in reading comprehension : Teachers' qualifications and teaching strategies in different subjects
Johansson, Sofia January 2015 (has links)
In this study, six teachers were interviewed about their view of, and their teaching of, reading comprehension, both for pupils who have cracked the reading code and for those who have not. The aim is to illustrate whether teachers in middle school devote time to practising reading comprehension, or whether this is left to the teachers of Swedish, since it is only in the syllabus for Swedish that pupils are explicitly to be given the opportunity to develop reading strategies. The interviews were semi-structured and based on a qualitative approach. The informants were three teachers of Swedish and three teachers of other subjects. Two interview guides with three questions each were used; the main questions were the same, but each guide contained some questions directly connected to the subject taught. The results show that all the teachers believe that practising reading comprehension should take place in all subjects, not just Swedish. However, the work is done differently: the teachers of Swedish discuss their teaching in a much more purposeful way than the other teachers, have developed their competence concerning reading comprehension, and have more knowledge of it than the teachers of other subjects. The teachers who do not teach Swedish say that lack of time is the reason why reading comprehension cannot be integrated to the extent that they would like.
|
205 |
Extracting Spatiotemporal Word and Semantic Representations from Multiscale Neurophysiological Recordings in Humans
Chan, Alexander Mark 21 June 2014 (has links)
With the recent advent of neuroimaging techniques, the majority of the research studying the neural basis of language processing has focused on the localization of various lexical and semantic functions. Unfortunately, the limited time resolution of functional neuroimaging prevents a detailed analysis of the dynamics involved in word recognition, and the hemodynamic basis of these techniques prevents the study of the underlying neurophysiology. Compounding this problem, current techniques for the analysis of high-dimensional neural data are mainly sensitive to large effects in a small area, preventing a thorough study of the distributed processing involved in representing semantic knowledge. This thesis demonstrates the use of multivariate machine-learning techniques for the study of the neural representation of semantic and speech information in electro/magneto-physiological recordings with high temporal resolution. Support vector machines (SVMs) allow for the decoding of semantic category and word-specific information from non-invasive electroencephalography (EEG) and magnetoencephalography (MEG) and demonstrate the consistent, but spatially and temporally distributed nature of such information. Moreover, the anteroventral temporal lobe (avTL) may be important for coordinating these distributed representations, as supported by the presence of supramodal category-specific information in intracranial recordings from the avTL as early as 150 ms after auditory or visual word presentation. Finally, to study the inputs to this lexico-semantic system, recordings from a high density microelectrode array in anterior superior temporal gyrus (aSTG) are obtained, and the recorded spiking activity demonstrates the presence of single neurons that respond specifically to speech sounds. The successful decoding of word identity from this firing rate information suggests that the aSTG may be involved in the population coding of acoustic-phonetic speech information that is likely on the pathway for mapping speech sounds to meaning in the avTL. The feasibility of extracting semantic and phonological information from multichannel neural recordings using machine learning techniques provides a powerful method for studying language using large datasets and has potential implications for the development of fast and intuitive communication prostheses. / Engineering and Applied Sciences
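As a hedged illustration of the kind of multivariate decoding described above (not the dissertation's actual pipeline or data), the sketch below trains scikit-learn's linear SVM on a synthetic stand-in for MEG epochs; the trial count, sensor and time dimensions, and the two word-category labels are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for MEG epochs: trials x (sensors * time points)
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 102, 60
X = rng.standard_normal((n_trials, n_sensors * n_times))
y = rng.integers(0, 2, n_trials)   # hypothetical labels, e.g. 0 = "animal" word, 1 = "tool" word

# Linear SVM with standardization, evaluated by 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated decoding accuracy: %.2f" % scores.mean())  # near chance (0.5) on random data
```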
|
206 |
High-dimensional classification for brain decoding
Croteau, Nicole Samantha 26 August 2015 (has links)
Brain decoding involves the determination of a subject’s cognitive state or an associated stimulus from functional neuroimaging data measuring brain activity. In this setting the cognitive state is typically characterized by an element of a finite set, and the neuroimaging data comprise voluminous amounts of spatiotemporal data measuring some aspect of the neural signal. The associated statistical problem is one of classification from high-dimensional data. We explore the use of functional principal component analysis, mutual information networks, and persistent homology for examining the data through exploratory analysis and for constructing features characterizing the neural signal for brain decoding. We review each approach from this perspective, and we incorporate the features into a classifier based on symmetric multinomial logistic regression with elastic net regularization. The approaches are illustrated in an application where the task is to infer from brain activity measured with magnetoencephalography (MEG) the type of video stimulus shown to a subject. / Graduate
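A minimal sketch of the classification stage described above, assuming scikit-learn's LogisticRegression with an elastic-net penalty as a stand-in for the symmetric multinomial model; the feature matrix is random placeholder data rather than real functional-PCA scores, mutual-information network summaries, or persistent-homology features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per trial, columns standing in for
# fPCA scores, network summaries and persistent-homology summaries.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 50))
y = rng.integers(0, 3, 120)   # three hypothetical stimulus classes

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```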
|
207 |
Αρχιτεκτονικές VLSI για την αποκωδικοποίηση κωδικών LDPC με εφαρμογή σε ασύρματες ψηφιακές επικοινωνίες / VLSI architectures for LDPC code decoding with application in wireless digital communications
Γλυκιώτης, Γιάννης 16 May 2007 (links)
This thesis focuses on the decoding of LDPC codes. LDPC encoding and decoding are studied and evaluated using combined criteria of delivered quality (BER under various transmission conditions) and hardware implementation complexity. Simulations are used to examine how decoder performance is affected by the finite word-length representation used in the hardware implementation of the architecture; once a word length is chosen for which the decoder's performance approaches the theoretical one, the decoder architecture is studied and designed to satisfy further practical criteria, with emphasis on low power consumption. The main contribution is a novel criterion for the termination of iterations in iterative LDPC decoders. The proposed criterion is amenable to VLSI implementation, and it is shown that it can substantially enhance previously reported LDPC decoder architectures by reducing the corresponding power dissipation. The concept of the criterion is the detection of cycles in the sequence of soft words produced during decoding. Such soft-word cycles occur in some cases of low signal-to-noise ratio, where the decoder is unable to decide on a codeword; this results in unnecessary power consumption, since the iterations no longer improve the bit error rate while the decoder continues to run. The proposed architecture terminates the decoding process when a soft-word cycle is detected, allowing substantial power savings at a minimal performance penalty. The criterion can be applied to any existing LDPC decoder architecture; in this thesis its effect is studied for the Hardware-Sharing and Parallel decoder architectures.
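A minimal sketch of the proposed termination idea in Python, assuming a generic iterative decoder is supplied as a callable; the function names, the rounding used to build the comparison key, and the unbounded set of stored soft words are illustrative choices only, whereas the hardware architecture of the thesis compares finite-precision soft words directly within the Hardware-Sharing and Parallel decoders.

```python
def decode_with_cycle_termination(soft_word, decoder_iteration, is_codeword, max_iter=50):
    """Run an iterative decoder, stopping early when the sequence of soft words
    enters a cycle (a soft word repeats), i.e. the decoder cannot converge.

    soft_word         : initial soft values (e.g. channel LLRs), a sequence of numbers
    decoder_iteration : callable mapping one soft word to the next (one decoding pass)
    is_codeword       : callable returning True if a hard-decision word satisfies all parity checks
    """
    seen = set()
    for iteration in range(max_iter):
        hard = tuple(v < 0 for v in soft_word)            # hard decision from the soft word
        if is_codeword(hard):
            return hard, iteration, "codeword found"
        key = tuple(round(v, 6) for v in soft_word)       # finite-precision hardware would compare words directly
        if key in seen:
            return hard, iteration, "cycle detected - terminate early"
        seen.add(key)
        soft_word = decoder_iteration(soft_word)
    return tuple(v < 0 for v in soft_word), max_iter, "iteration limit reached"
```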
|
208 |
Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes
Planjery, Shiva Kumar January 2013 (has links)
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
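To make the finite-alphabet message flow concrete, here is a hedged Python/NumPy sketch of message updates over a 3-bit, seven-level alphabet. The check-node rule follows the familiar min-sum form, but the variable-node map is only a clipping placeholder: the actual FAID maps are the specially designed non-linear tables produced by the design methodology discussed above and are not reproduced here.

```python
import numpy as np

# Seven-level message alphabet {-3, ..., +3}, standing in for {-L3, ..., +L3}
ALPHABET_MIN, ALPHABET_MAX = -3, 3

def check_node_update(incoming):
    """Min-sum-like check-node rule on the finite alphabet:
    sign = product of incoming signs, magnitude = minimum incoming magnitude."""
    msgs = np.asarray(incoming)
    signs = np.sign(msgs)
    signs[signs == 0] = 1
    return int(np.prod(signs) * np.min(np.abs(msgs)))

def variable_node_update(channel_value, incoming):
    """Placeholder variable-node map: clip the sum of the channel value and the
    extrinsic incoming messages onto the alphabet. A real FAID replaces this
    with a designed lookup table chosen to maximize guaranteed error correction."""
    total = channel_value + sum(incoming)
    return int(np.clip(total, ALPHABET_MIN, ALPHABET_MAX))

# One degree-3 variable node on the BSC: channel value +1/-1 from the received bit
print(check_node_update([2, -3]))        # -> -2
print(variable_node_update(1, [3, -1]))  # -> 3
```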
|
209 |
Detection and Decoding for Magnetic Storage Systems
Radhakrishnan, Rathnakumar January 2009 (has links)
The hard-disk storage industry is at a critical time as the current technologies are incapable of achieving densities beyond 500 Gb/in², which will be reached in a few years. Many radically new storage architectures have been proposed, which along with advanced signal processing algorithms are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems. Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks. Using graph theoretical algorithms, a method to incorporate the maximum number of such checks for a given complexity is provided. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are practically the most important. This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components like receivers, equalizers and detectors for this channel, the equivalence of this system to a two-track/two-head system is determined and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system than using a binary system with inter-track interference. Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on transition characteristics and non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
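As background for the detection problem discussed above (this is not the dissertation's joint detector, its auxiliary-parity-check scheme, or its ternary system), the following Python sketch runs maximum-likelihood sequence detection with the Viterbi algorithm on a toy memory-1 dicode (1 − D) channel, a common idealization of a magnetic recording channel; the channel taps, noise level, and sequence length are assumptions made for the example.

```python
import numpy as np

def viterbi_detect(r, h=(1.0, -1.0), x_prev=1.0):
    """ML sequence detection for a binary-input ISI channel
    r_k = h[0]*x_k + h[1]*x_{k-1} + noise, x_k in {-1, +1},
    with a known starting symbol x_prev (two-state trellis)."""
    inputs = (-1.0, 1.0)
    metric = {s: (0.0 if s == x_prev else np.inf) for s in inputs}
    paths = {s: [] for s in inputs}
    for rk in r:
        new_metric, new_paths = {}, {}
        for x_k in inputs:                       # candidate current input
            best_m, best_state = np.inf, None
            for x_km1 in inputs:                 # candidate previous input (trellis state)
                y = h[0] * x_k + h[1] * x_km1    # noiseless channel output for this branch
                m = metric[x_km1] + (rk - y) ** 2
                if m < best_m:
                    best_m, best_state = m, x_km1
            new_metric[x_k] = best_m
            new_paths[x_k] = paths[best_state] + [x_k]
        metric, paths = new_metric, new_paths
    return np.array(paths[min(metric, key=metric.get)])

# Toy run: a short +/-1 sequence through the dicode channel with mild noise
rng = np.random.default_rng(2)
x = rng.choice([-1.0, 1.0], size=12)
r = np.array([x[k] - x[k - 1] for k in range(1, 12)]) + 0.1 * rng.standard_normal(11)
print((viterbi_detect(r, x_prev=x[0]) == x[1:]).all())   # True at this noise level
```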
|
210 |
Downlink W-CDMA performance analysis and receiver implementation on SC140 Motorola DSP
Ghosh, Kaushik 30 September 2004 (has links)
High data rate applications are the trend in today's wireless technology. The W-CDMA standard, with a chip rate of 3.84 Mcps, was designed to support such high data rates. The main purpose of this research was to analyze the feasibility of a fixed-point implementation of the W-CDMA downlink receiver algorithm on a general-purpose digital signal processor (the StarCore SC140 by Motorola). The very long instruction word (VLIW) architecture of the SC140 core is exploited to generate an optimized implementation that meets the real-time requirements of the algorithm. The other main aim of this work was to study and evaluate the performance of the W-CDMA downlink structure with incorporated space-time transmit diversity. The effect of the channel estimation algorithm used was also studied extensively.
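Because the downlink structure studied here incorporates space-time transmit diversity, the following floating-point Python/NumPy sketch shows generic Alamouti-style encoding and combining, the scheme underlying open-loop STTD; it assumes a flat-fading channel that stays constant over two symbol periods and is not the fixed-point SC140 implementation developed in the thesis.

```python
import numpy as np

def sttd_encode(s1, s2):
    """Alamouti-style block over two antennas (rows) and two symbol periods (columns):
    antenna 1 sends [s1, -conj(s2)], antenna 2 sends [s2, conj(s1)]."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def sttd_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna, assuming channel gains h1, h2
    (one per transmit antenna) are constant over the two symbol periods."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Noise-free toy check with QPSK symbols and a random flat-fading channel
rng = np.random.default_rng(3)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
X = sttd_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[1, 0]      # received in the first symbol period
r2 = h1 * X[0, 1] + h2 * X[1, 1]      # received in the second symbol period
gain = abs(h1) ** 2 + abs(h2) ** 2
est1, est2 = sttd_combine(r1, r2, h1, h2)
print(np.allclose([est1 / gain, est2 / gain], [s1, s2]))   # True: both symbols recovered
```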
|