221

Att förklara läsförståelse hos förstaklassare : En studie om vilka kognitiva förmågor som förklarar läsförståelse hos barn i årskurs ett / To Explain the Reading Ability of First Graders : A study of the cognitive skills that explain reading comprehension in children during the first year of school

Eriksson, Malin, Moritz, Sara January 2014 (has links)
Fonologisk medvetenhet, ordavkodningsförmåga, bokstavskunskap och arbetsminneskapacitet har visats predicera den tidiga läsförmågan. Syftet med föreliggande studie var att undersöka hur dessa olika kognitiva förmågor tillsammans förklarar läsförståelse hos barn i första klass. Läsförståelse, ordavkodning, fonologisk medvetenhet, bokstavskunskap samt arbetsminne undersöktes hos 36 elever i årskurs ett med normal hörsel och svenska som modersmål. Resultatet visar att ordavkodning och fonologisk medvetenhet tillsammans förklarar 62 % av variansen i läsförståelsen hos deltagarna. Slutsatsen är därmed att ordavkodning och fonologisk medvetenhet tillsammans predicerar läsförståelse i årskurs ett. / Phonological awareness, decoding skills, letter knowledge and working memory capacity predict early reading skills. The aim of the present study was to examine how these basic cognitive abilities together explain reading comprehension in children during the first year of school. Reading comprehension, decoding, phonological awareness, letter knowledge and working memory were studied in 36 first-grade children with normal hearing and Swedish as their native language. The results show that decoding and phonological awareness together explain 62% of the variance in reading comprehension among the participants. The conclusion of the present study is that decoding and phonological awareness predict reading comprehension during first grade.
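The 62% reported above is the share of variance (R²) explained in a multiple regression of reading comprehension on decoding and phonological awareness. A minimal sketch of that kind of analysis, with synthetic scores standing in for the study's measurements (all names and values are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in scores for 36 hypothetical first graders; the study's
# actual measurements are not reproduced here.
rng = np.random.default_rng(0)
n = 36
decoding = rng.normal(50, 10, n)             # word decoding score
phon_awareness = rng.normal(30, 5, n)        # phonological awareness score
reading_comp = 0.6 * decoding + 0.8 * phon_awareness + rng.normal(0, 8, n)

# Regress reading comprehension on the two predictors and report the share
# of variance explained (R^2), the quantity cited in the abstract.
X = np.column_stack([decoding, phon_awareness])
r2 = LinearRegression().fit(X, reading_comp).score(X, reading_comp)
print(f"variance explained (R^2): {r2:.2f}")
```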
222

Early-Decision Decoding of LDPC Codes

Blad, Anton January 2009 (has links)
Since their rediscovery in 1995, low-density parity-check (LDPC) codes have received widespread attention as practical capacity-approaching code candidates. It has been shown that the class of codes can perform arbitrarily close to the channel capacity, and LDPC codes are also used or suggested for a number of important current and future communication standards. However, the problem of implementing an energy-efficient decoder has not yet been solved. Whereas the decoding algorithm is computationally simple, with uncomplicated arithmetic operations and low accuracy requirements, the random structure and irregularity of a theoretically well-defined code does not easily allow efficient VLSI implementations. Thus the LDPC decoding algorithm can be said to be communication-bound rather than computation-bound. In this thesis, a modification to the sum-product decoding algorithm called early-decision decoding is suggested. The modification is based on the idea that the values of the bits in a block can be decided individually during decoding. As the sum-product decoding algorithm is a soft-decision decoder, a reliability can be defined for each bit. When the reliability of a bit is above a certain threshold, the bit can be removed from the rest of the decoding process, and thus the internal communication associated with the bit can be removed in subsequent iterations. However, the early-decision modification comes with an increased error probability. Thus, bounds on the achievable performance, as well as methods to detect graph inconsistencies resulting from erroneous decisions, are presented. Also, a hybrid decoder achieving a negligible performance penalty compared to the sum-product decoder is presented. With the hybrid decoder, the internal communication is reduced by up to 40% for a rate-1/2 code with a length of 1152 bits, whereas increasing the rate allows significantly higher gains. The algorithms have been implemented in a Xilinx Virtex 5 FPGA, and the resulting slice utilization and energy dissipation have been estimated. Due to the increased logic overhead of the early-decision decoder, the slice utilization increases from 14.5% to 21.0%, whereas the logic energy dissipation reduction from 499 pJ to 291 pJ per iteration and bit is offset by the clock distribution power, which increases from 141 pJ to 191 pJ per iteration and bit. Still, the early-decision decoder shows an estimated net 16% decrease in energy dissipation.
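The early-decision idea can be sketched compactly: define a reliability for each bit from its posterior log-likelihood ratio and stop updating (and communicating) messages for bits whose reliability crosses a threshold. The sketch below uses a min-sum check-node update in place of the full sum-product rule, and the threshold, parity-check matrix and LLR values are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def early_decision_min_sum(H, llr, threshold=8.0, max_iter=50):
    """Sketch of early-decision LDPC decoding: bits whose posterior
    reliability |LLR| exceeds `threshold` are decided and withdrawn from
    further message passing, removing their internal communication in
    later iterations. A min-sum check update stands in for the full
    sum-product rule; the threshold value is an illustrative assumption."""
    m, n = H.shape
    rows, cols = np.nonzero(H)             # edges of the Tanner graph
    msg_v2c = llr[cols].astype(float)      # variable-to-check messages
    active = np.ones(n, dtype=bool)        # bits still taking part
    hard = (llr < 0).astype(int)           # current hard decisions

    for _ in range(max_iter):
        # check-to-variable messages (min-sum approximation)
        msg_c2v = np.zeros_like(msg_v2c)
        for c in range(m):
            e = np.flatnonzero(rows == c)
            for i in e:
                others = e[e != i]
                msg_c2v[i] = (np.prod(np.sign(msg_v2c[others]))
                              * np.min(np.abs(msg_v2c[others])))
        # variable-node updates, skipping bits already decided early
        for v in np.flatnonzero(active):
            e = np.flatnonzero(cols == v)
            post = llr[v] + msg_c2v[e].sum()
            hard[v] = int(post < 0)
            msg_v2c[e] = post - msg_c2v[e]
            if abs(post) > threshold:      # early decision: freeze this bit
                active[v] = False
        if not ((H @ hard) % 2).any():     # all parity checks satisfied
            break
    return hard

# Toy parity-check matrix and a strongly reliable all-zero received word
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.full(7, 4.0)                      # positive LLRs favour bit value 0
print(early_decision_min_sum(H, llr))      # -> [0 0 0 0 0 0 0]
```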
223

”Vem sjutton vill ’gå på’ en reklam?” : En studie om avkodningens betydelse ur ett mottagarperspektiv inom marknadsföringsmetoden celebrity endorsement / “Who on earth wants to ‘fall for’ an ad?”: A study of the role of decoding from a receiver perspective within the marketing method of celebrity endorsement

Widå, Camilla, Miras Wardell, Sara January 2015 (has links)
Celebrities in advertising have become something consumers come into contact with on a daily basis. Corporations use celebrities and their attributes to create positive reinforcement of, and association with, the company or its marketed product. Through increased media usage, the consumers targeted by this advertising technique have gained better insight into celebrities’ daily lives. People are not only receivers of advertising; they are also producers of media content for the media’s agenda, which makes everyone significant in the distribution of advertising. The research area is called celebrity endorsement. The theories within the area presume that all receivers interpret the message in the same manner. In classic communication research there are theories, such as habitus and encoding/decoding, that form the basis for a receiver perspective. These theories hold, among other things, that people’s past experience influences how they act in a social structure, and that receivers therefore interpret communication in different ways. In this study we therefore seek a deeper understanding of the consumer’s perspective on celebrity endorsement and connect it to the theories of classical communication research. The study also investigates how people within the same target audience decode celebrities and advertising activities. To reach a result we used a qualitative reception analysis with focus groups drawn from different target audiences. Within the focus groups, discussions of current TV commercials and of fictitious collaborations between celebrities and corporations were used to let the participants express their opinions about celebrities in advertising. The study’s results are summarized in four perspectives that emerged from the empirical material. The first perspective is analyzed in terms of the participants’ earlier experiences. The second perspective deals with the participants’ self-knowledge and with differences in their ability to objectively analyze their own, as well as others’, capability of decoding the messages. The third perspective concerns the participants’ admissions: differences in how the participants reason about, and admit or do not admit to, celebrity endorsement’s impact on them. The last perspective describes how participants used irony to make a difficult topic easier to talk about. The results show that advertising with celebrities is decoded in more ways than are described in the models of celebrity endorsement, and that classical communication research should be brought into the field. Keywords: celebrity endorsement, celebrity, encoding, decoding, habitus, Stuart Hall, Pierre Bourdieu, trademarks, brand
224

Αρχιτεκτονικές για LDPC αποκωδικοποιητές / Architectures for LDPC decoders

Διακογιάννης, Αρτέμιος 16 June 2011 (has links)
Ένα από τα βασικά μειονεκτήματα που παρουσιάζει ο σχεδιασμός και η υλοποίηση LDPC αποκωδικοποιητών είναι η μεγάλη πολυπλοκότητα που παρουσιάζεται σε επίπεδο υλικού εξαιτίας της εσωτερικής διασύνδεσης των μονάδων επεξεργασίας δεδομένων. Η αρχιτεκτονική που επιτυγχάνει το μέγιστο επίπεδο παραλληλότητας και κατά συνέπεια είναι πολύ αποδοτική όσον αφορά την ταχύτητα αποκωδικοποίησης, δεν χρησιμοποιείται συχνά εξαιτίας της πολυπλοκότητας του υλικού λόγω των πολλαπλών κυκλωμάτων διασύνδεσης που απαιτεί. Στην παρούσα διπλωματική εργασία προτείνεται μια νέα αρχιτεκτονική για το δίκτυο διασύνδεσης ενώ παράλληλα έχει υλοποιηθεί και ένας αλγόριθμος για την αποδοτική τοποθέτηση των επεξεργαστικών μονάδων σε αυτό το δίκτυο. Επίσης έχει μελετηθεί και η επίδραση μειωμένης μετάδοσης πληροφορίας σε κάθε επανάληψη του αλγορίθμου αποκωδικοποίησης. Το περιβάλλον που χρησιμοποιήθηκε για την εξομοίωση και την παραγωγή των αποτελεσμάτων είναι η πλατφόρμα της Matlab. Η προτεινόμενη αρχιτεκτονική υλοποιήθηκε και εξομοιώθηκε σε κώδικες LDPC που αποτελούν μέρος του προτύπου DVB-S2 (Digital Video Broadcasting). Το συγκεκριμένο πρότυπο, εκτός των άλλων, καθορίζει και τις προδιαγραφές των κωδίκων LDPC που χρησιμοποιούνται κατά την κωδικοποίηση και αποκωδικοποίηση δεδομένων σε συστήματα ψηφιακής δορυφορικής μετάδοσης. Τα αποτελέσματα των εξομοιώσεων σχετίζονται με την πολυπλοκότητα της προτεινόμενης αρχιτεκτονικής σε υλικό αλλά και της απόδοσης (ταχύτητα αποκωδικοποίησης) και συγκρίνονται με την βασική πλήρως παράλληλη αρχιτεκτονική. / One of the main disadvantages of the design and implementation of LDPC decoders is the great complexity presented at the hardware level because of the internal interconnection of the processing units. The fully parallel architecture, which achieves the maximum level of parallelism and hence is very efficient in terms of decoding speed, is not used often because of the hardware complexity due to the multiple interconnection circuits it requires. This MSc thesis proposes a new architecture for the interconnection network and also introduces an algorithm for the efficient placement of the processing units in this network. In addition, a modified version of the decoding algorithm has been implemented. The relative advantage of this algorithm is that in each iteration only a percentage of the processing units exchange information with each other. That approach further reduces the hardware complexity and power usage. The environment used to simulate and produce the results is Matlab. The proposed architecture is implemented and simulated on LDPC codes that are part of the DVB-S2 (Digital Video Broadcasting) standard. This standard, among other things, determines the specifications of the LDPC codes used in the channel encoding and decoding process in digital satellite transmission systems. The simulation results, concerning both the hardware complexity of the proposed architecture and its performance (decoding speed), are compared with the basic fully parallel architecture.
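The reduced-communication scheme described above, in which only a portion of the processing units exchange information in each iteration, can be illustrated with a small scheduling sketch; the uniform random selection policy and the parameter names are assumptions for the example, not the thesis's placement algorithm.

```python
import numpy as np

def partial_update_schedule(num_checks, fraction, num_iters, seed=0):
    """For each decoding iteration, choose which check-node processors are
    allowed to exchange messages. Activating only a fraction of the units
    per iteration is the reduced-communication idea described above; the
    uniform random selection policy and parameter names are assumptions."""
    rng = np.random.default_rng(seed)
    k = max(1, round(fraction * num_checks))
    return [rng.choice(num_checks, size=k, replace=False)
            for _ in range(num_iters)]

# Example: 12 check-node units, 50% active per iteration, 4 iterations
for it, active in enumerate(partial_update_schedule(12, 0.5, 4)):
    print(f"iteration {it}: active units {sorted(active.tolist())}")
```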
225

The branding of the "new Ukraine" : A media production study of the encoding/decoding of Europeanness during Eurovision Song Contest 2017

Hallgren, Karin January 2018 (has links)
There are several studies observing the phenomenon of nation branding as political pursuits and as texts. However, the media are generally treated as neutral platforms in branding literature. Also, relatively little has been done to explore how the context of branding affects the level of text production, not least in relation to media events. Deploying a cultural approach, the present study suggests that the production of branding may be examined in terms of cultural codes (Hall, 1982) and dominant or preferred meanings (Hall, 1973/1992). The aim of this study is to explore processes of nation branding, as part of media events, from a media production perspective. This is done through observations of the encoding/decoding of the branding narrative of the Europeanness of Eurovision, as a formula for a revised Ukrainian identity, in the production and backstage processes of the 2017 event. The material consists of qualitative interviews with five agents involved in the branding of Ukraine during Eurovision. The analysis is based on the theoretical concepts of, firstly, Hall’s (1973/1992) model of encoding/decoding and, secondly, Ytreberg’s (1999) model for the analysis of text production. Hall emphasises the discursive aspects of audiences’ interpretations, but, with reference to Ytreberg’s idea of text production as a result of negotiated interpretations, it is argued that discursive aspects are just as significant for agents in the production process. Three cases are used to illuminate the tensions in the media production of the branding narrative: the encoding/decoding of a branding concept, of the relationship to Russia, and of a Ukrainian Europeanness. The tensions mainly occur between the agents in the professional position in relation to oppositional readings of the dominant code (Hall, 1973/1992). They can be understood as struggles over the preferred meaning (Hall, 1973/1992) of Ukraine’s Europeanness in the branding narrative, which are enacted in the media production. The two main strategies for negotiating the tensions concern the representation of the categories of time and space. However, I propose that the agents in the media production also perform a third strategy in relation to the tensions that arise, the detached strategy of professionalism, based in the frameworks of knowledge (Hall, 1973/1992) that the agents possess.
226

Arquitetura de um decodificador de áudio para o Sistema Brasileiro de Televisão Digital e sua implementação em FPGA / Architecture of an audio decoder for the Brazilian Digital Television System and its implementation on an FPGA

Renner, Adriano January 2011 (has links)
O Sistema Brasileiro de Televisão Digital estabeleceu como padrão de codificação de áudio o algoritmo MPEG-4 Advanced Audio Coding, mais precisamente nos perfis Low Complexity, High Efficiency versão 1 e High Efficiency versão 2. O trabalho apresenta um estudo detalhado sobre o padrão, contendo desde alguns conceitos da psicoacústica como o mascaramento até a metodologia de decodificação do stream codificado, sempre voltado para o mercado do SBTVD. É proposta uma arquitetura em hardware para um decodificador compatível com o padrão MPEG-4 AAC LC. O decodificador é separado em dois grandes blocos mantendo em um deles o banco de filtros, considerado a parte mais custosa em termos de processamento. No bloco restante é realizada a decodificação do espectro, onde ocorre a decodificação dos códigos de Huffman, o segundo ponto crítico do algoritmo em termos de demandas computacionais. Por fim é descrita a implementação da arquitetura proposta em VHDL para prototipação em um FPGA da família Cyclone II da Altera. / MPEG-4 Advanced Audio Coding is the chosen algorithm for the Brazilian Digital Television System (SBTVD), supporting the Low Complexity, High Efficiency version 1 and High Efficiency version 2 profiles. A detailed study of the algorithm is presented, ranging from psychoacoustic concepts like masking to a review of the AAC bitstream decoding process, always keeping the SBTVD in mind. A digital hardware architecture is proposed, in which the algorithm is split into two separate blocks, one of them containing the Filter Bank, considered the most demanding task. The other block is responsible for decoding the coded spectrum, which contains the second most demanding task of the system: the Huffman decoding. In the final part of this work the conversion of the proposed architecture into VHDL modules meant to be prototyped with an Altera Cyclone II FPGA is described.
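Huffman decoding of the coded spectrum, named above as the second most demanding task, amounts to walking a prefix-code table bit by bit. A minimal sketch with a toy codebook (not an actual MPEG-4 AAC spectral codebook) follows:

```python
def huffman_decode(bits, code_table):
    """Minimal prefix-code (Huffman) decoder: walk the bit string and emit a
    symbol whenever the buffered bits match a codeword. The codebook below is
    a toy example, not one of the MPEG-4 AAC spectral codebooks."""
    symbols, buf = [], ""
    for b in bits:
        buf += b
        if buf in code_table:
            symbols.append(code_table[buf])
            buf = ""
    return symbols

toy_codebook = {"0": 0, "10": 1, "110": 2, "111": 3}
print(huffman_decode("110100111", toy_codebook))   # -> [2, 1, 0, 3]
```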
227

Decodificação de códigos sobre anéis de Galois / Decoding of codes over Galois rings

Villafranca, Rogério January 2014 (has links)
Orientador: Prof. Dr. Francisco César Polcino Milies / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Matemática, 2014. / Códigos sobre anéis vem sendo estudados desde a década de 70 e hoje sabe-se que alguns códigos não-lineares sobre corpos são imagens de códigos lineares sobre anéis Zpm. Neste trabalho, lidamos com códigos sobre Anéis de Galois, que são uma generalização tanto para corpos finitos quanto para anéis Zpm. Em uma primeira parte dedicada a anéis, definimos anéis de Galois como um caso particular de anéis locais, mostramos a equivalência entre essa definição e a construção clássica desses anéis como extensões de Zpm e apresentamos propriedades importantes. Em seguida, na parte referente à Teoria de Códigos, descrevemos as bases necessárias desse assunto e apresentamos um método permitindo a obtenção de um algoritmo de decodificação para um código sobre um anel de Galois a partir de algoritmos de decodificação para códigos lineares sobre o corpo de resíduos desse anel. / Codes over rings have been a research subject since the 1970s, and today it is well known that some non-linear codes over fields are images of linear codes over Zpm rings. In this work we deal with codes over Galois rings, which generalize both finite fields and Zpm rings. In a first part concerning ring theory, we define Galois rings as a particular case of local rings, show the equivalence of this definition to the classical construction of such rings as extensions of Zpm and present important properties of these structures. Next, concerning Coding Theory, we describe the basic facts of this subject and present a method that allows us to obtain decoding algorithms for a linear code over a Galois ring from decoding algorithms for linear codes over the residue field of that ring.
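As a small illustration of the Galois-ring construction discussed above, the sketch below multiplies two elements of GR(4, 2) = Z4[x]/(x^2 + x + 1), i.e. an extension of Z4 by a basic irreducible polynomial. The choice of modulus and representation are assumptions made for the example; the dissertation's decoding method via the residue field is not reproduced here.

```python
import numpy as np

def gr42_mul(a, b):
    """Multiply two elements of the Galois ring GR(4, 2) = Z4[x]/(x^2 + x + 1),
    given as coefficient tuples (constant term first). The modulus polynomial
    x^2 + x + 1, a basic irreducible over Z4, is hardcoded; this construction
    is illustrative only."""
    # polynomial product over the integers
    prod = np.zeros(len(a) + len(b) - 1, dtype=int)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # reduce modulo x^2 + x + 1 using x^k = -x^(k-1) - x^(k-2) for k >= 2
    while len(prod) > 2:
        c = prod[-1]
        prod = prod[:-1]
        prod[-1] -= c
        prod[-2] -= c
    # reduce coefficients modulo 4
    return tuple(int(x) % 4 for x in prod)

# x * (x + 1) = x^2 + x = (-x - 1) + x = -1 = 3 in GR(4, 2)
print(gr42_mul((0, 1), (1, 1)))   # -> (3, 0)
```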
228

Comparison of Feature Selection Methods for Robust Dexterous Decoding of Finger Movements from the Primary Motor Cortex of a Non-human Primate Using Support Vector Machine

January 2015 (has links)
abstract: Robust and stable decoding of neural signals is imperative for implementing a useful neuroprosthesis capable of carrying out dexterous tasks. A nonhuman primate (NHP) was trained to perform combined flexions of the thumb, index and middle fingers in addition to individual flexions and extensions of the same digits. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon action potential firing rates. The effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis, and Mutual Information Maximization) were compared based on SVM classification performance. SVM classification was used to examine the functional parameters of (i) efficacy, (ii) endurance to simulated failure and (iii) longevity of classification. The effect of using isolated-neuron versus multi-unit firing rates as the feature vector supplied to the SVM was also compared. The best classification performance was obtained on post-implantation day 36: when using multi-unit firing rates, the worst classification accuracy resulted from features selected with the Wilcoxon signed-rank test (51.12 ± 0.65%) and the best classification accuracy resulted from Mutual Information Maximization (93.74 ± 0.32%). On the same day, when using single-unit firing rates, the classification accuracy from the Wilcoxon signed-rank test was 88.85 ± 0.61% and from Mutual Information Maximization 95.60 ± 0.52% (degrees of freedom = 10, level of chance = 10%). / Dissertation/Thesis / Masters Thesis Bioengineering 2015
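The feature-selection-plus-SVM pipeline described above can be sketched with standard tooling; the snippet below uses mutual-information feature selection followed by an RBF-kernel SVM on synthetic firing-rate data, with all dimensions, parameters and data standing in for the recorded dataset.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the recordings: trials x units firing-rate matrix
# and one of 10 finger-movement classes per trial (dimensions, class count
# and noise model are illustrative assumptions).
rng = np.random.default_rng(0)
n_trials, n_units, n_classes = 600, 120, 10
y = rng.integers(0, n_classes, size=n_trials)
X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
X[:, :20] += 2.0 * y[:, None]                # make a subset of units informative

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=30),  # feature-selection step
    StandardScaler(),
    SVC(kernel="rbf", C=1.0),                # multi-class SVM classifier
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```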
229

On Code Design for Interference Channels

January 2015 (has links)
abstract: There has been a lot of work on the characterization of capacity and achievable rate regions, and rate-region outer bounds, for various multi-user channels of interest. Parallel to the developed information theoretic results, practical codes have also been designed for some multi-user channels such as multiple access channels, broadcast channels and relay channels; however, interference channels have not received much attention and only a limited amount of work has been conducted on them. With this motivation, in this dissertation, the design of practical and implementable channel codes is studied, focusing on multi-user channels with special emphasis on interference channels; in particular, irregular low-density parity-check codes are exploited for a variety of cases, and trellis-based codes are designed for short block lengths. Novel code design approaches are first studied for the two-user Gaussian multiple access channel. Exploiting a Gaussian mixture approximation, new methods are proposed wherein the optimized codes are shown to improve upon the available designs and off-the-shelf point-to-point codes applied to the multiple access channel scenario. The code design is then examined for the two-user Gaussian interference channel implementing the Han-Kobayashi encoding and decoding strategy. Compared with the point-to-point codes, the newly designed codes consistently offer better performance. Parallel to this work, code design is explored for discrete memoryless interference channels, wherein the channel inputs and outputs are taken from a finite alphabet, and it is demonstrated that the designed codes are superior to single-user codes used with time sharing. Finally, the code design principles are also investigated for the two-user Gaussian interference channel employing trellis-based codes with short block lengths for the case of strong and mixed interference levels. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2015
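As background for the setting, the short sketch below evaluates two textbook baselines for the symmetric two-user Gaussian interference channel: treating interference as noise, and equal-slot time sharing (same transmit power, no power pooling). These closed-form rates are standard; the Han-Kobayashi rate splitting and the LDPC code optimization themselves are not reproduced here.

```python
import math

def tin_rate(snr, inr):
    """Per-user rate when treating interference as noise on a symmetric
    two-user Gaussian interference channel (real signalling)."""
    return 0.5 * math.log2(1.0 + snr / (1.0 + inr))

def time_sharing_rate(snr):
    """Per-user rate under equal-slot time sharing with unchanged transmit
    power (no power pooling); each user is silent half of the time."""
    return 0.5 * 0.5 * math.log2(1.0 + snr)

# Illustrative operating points: weak and strong interference
for snr, inr in [(10.0, 1.0), (10.0, 10.0)]:
    print(f"SNR={snr:5.1f}, INR={inr:5.1f}: "
          f"TIN {tin_rate(snr, inr):.3f} bits/use, "
          f"time sharing {time_sharing_rate(snr):.3f} bits/use")
```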
230

Armazenamento e reconstrução de imagens comprimidas via codificação e decodificação / Storage and reconstruction of compressed images via encoding and decoding

Travassos, Natalia Caroline Lopes January 2016 (has links)
Orientador: Francisco Villarreal Alvarado / Resumo: Este trabalho apresenta um algoritmo de codificação para imagens comprimidas que representa cada pixel de uma imagem e suas coordenadas por um único valor. Para cada pixel e suas coordenadas, esse valor único é armazenado em um vetor que é usado na reconstrução da imagem sem que sua qualidade seja comprometida. O método proposto apresenta melhorias em relação a dois outros algoritmos propostos anteriormente, sendo que um deles já é uma melhoria do primeiro. O algoritmo apresentado neste trabalho difere dos outros dois algoritmos estudados na diminuição significativa do espaço necessário para armazenamento das imagens, na determinação de uma taxa de compressão exata e na redução do tempo de processamento de decodificação. Um outro avanço apresentado foi a compressão de imagens coloridas utilizando a ferramenta wavemenu em conjunto com o algoritmo que determina a taxa de compressão. / This work presents an encoding algorithm for compressed images that represents each pixel of an image and its coordinates by a single value. For each pixel and its coordinates, this single value is stored in a vector that is used to reconstruct the image without compromising its quality. The proposed method improves on two other previously proposed algorithms, one of which is already an improvement of the first. The algorithm presented in this work differs from the other two algorithms studied in the significant reduction of the space required to store the images, in the determination of an exact compression ratio and in the reduction of the decoding processing time. A further contribution is the compression of color images using the wavemenu tool together with the algorithm that determines the compression ratio. / Mestre
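The abstract does not specify how a pixel and its coordinates are combined into one value, so the sketch below assumes one possible packing (flat coordinate index times 256 plus the 8-bit intensity) purely to illustrate how such a representation allows exact reconstruction; the vector of packed values plays the role of the stored codes.

```python
import numpy as np

def encode(img):
    """Pack each pixel's coordinates and 8-bit intensity into one integer
    (value = (row * width + col) * 256 + intensity). The packing rule is an
    illustrative assumption, not the dissertation's actual mapping."""
    h, w = img.shape
    rows, cols = np.indices((h, w))
    codes = (rows * w + cols).astype(np.int64) * 256 + img.astype(np.int64)
    return codes.ravel(), (h, w)

def decode(codes, shape):
    """Exact reconstruction from the packed vector (lossless by design)."""
    h, w = shape
    intensity = codes % 256
    flat_index = codes // 256
    img = np.zeros((h, w), dtype=np.uint8)
    img[flat_index // w, flat_index % w] = intensity
    return img

img = np.random.default_rng(1).integers(0, 256, size=(4, 5), dtype=np.uint8)
codes, shape = encode(img)
assert np.array_equal(decode(codes, shape), img)   # reconstruction is exact
print("first packed values:", codes[:5])
```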
